Merge lp:~leonardr/launchpad/test-representation-cache into lp:launchpad
Status: Merged
Merged at revision: 11011
Proposed branch: lp:~leonardr/launchpad/test-representation-cache
Merge into: lp:launchpad
Diff against target: 323 lines (+220/-10), 8 files modified
  lib/canonical/database/sqlbase.py (+7/-0)
  lib/canonical/launchpad/pagetests/webservice/cache.txt (+55/-0)
  lib/canonical/launchpad/zcml/webservice.zcml (+5/-0)
  lib/lp/services/memcache/client.py (+3/-1)
  lib/lp/services/memcache/doc/restful-cache.txt (+102/-0)
  lib/lp/services/memcache/restful.py (+39/-0)
  lib/lp/services/memcache/tests/test_doc.py (+8/-8)
  versions.cfg (+1/-1)
To merge this branch: bzr merge lp:~leonardr/launchpad/test-representation-cache
Related bugs: (none)
Reviews:
  Stuart Bishop (community): Approve
  Brad Crittenden (community, code): Approve
Review via email: mp+26513@code.launchpad.net
Commit message
Description of the change
This branch integrates version 0.9.27 of lazr.restful to create a memcached-based cache for JSON representations of entries. In tests with a populated cache, serving an entry representation was roughly four times faster.
lazr.restful takes care of putting representations in the cache and retrieving them at appropriate times. This code takes care of invalidating the cache when a Storm object changes, and of turning a Storm object into a unique cache key.
I plan to make one more change to this branch: make it easy for the LOSAs to turn the cache off if it turns out to cause problems in production. I'm not really sure how to do this, so I'm presenting the branch as is while I work on this other problem.
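The two responsibilities described above (deriving a unique cache key from a Storm object, and invalidating the cache when the object is flushed) can be sketched roughly as follows. This is an illustration only: `FakeRepresentationCache` and `FakePerson` are stand-ins invented here, not Launchpad classes, and a dict replaces the real memcached client.

```python
class FakeRepresentationCache:
    """Stand-in for the IRepresentationCache utility, backed by a dict."""

    def __init__(self):
        self.store = {}

    def key_for(self, obj, media_type, version):
        # Mirror the branch's scheme: table name + primary-key tuple,
        # followed by the media type and the web service version.
        return "%s%r,%s,%s" % (obj.table, obj.primary_key, media_type, version)

    def set(self, obj, media_type, version, representation):
        self.store[self.key_for(obj, media_type, version)] = representation

    def delete(self, obj, media_type, version):
        self.store.pop(self.key_for(obj, media_type, version), None)


class FakePerson:
    """Stand-in for a Storm-mapped database object."""
    table = "Person"
    primary_key = (29,)

    def __storm_flushed__(self, cache):
        # In the branch the hook finds the cache with
        # getUtility(IRepresentationCache); here it is passed in.
        cache.delete(self, "application/json", "devel")


cache = FakeRepresentationCache()
person = FakePerson()
cache.set(person, "application/json", "devel", '{"name": "salgado"}')
print(cache.key_for(person, "application/json", "devel"))
# Person(29,),application/json,devel

person.__storm_flushed__(cache)
print(cache.store)  # {} -- the representation is invalidated on flush
```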
Stuart Bishop (stub) wrote:
On Tue, Jun 1, 2010 at 11:12 PM, Leonard Richardson
<email address hidden> wrote:
> === modified file 'lib/canonical/database/sqlbase.py'
> +    def __storm_flushed__(self):
> +        """Invalidate the web service cache."""
> +        cache = getUtility(IRepresentationCache)
> +        cache.delete(self)
This looks like a decent way of hooking in. Unfortunately, SQLBase is
legacy - used by objects providing the SQLObject compatibility layer.
Some of our newer classes use the Storm base class directly. If there
is no way to register an on-flush hook with Storm, you will need to
create a Launchpad specific subclass of storm.Storm that provides this
hook, and update classes using the Storm base class directly to use
LPStorm.
The cache can become stale by:
1) Raw SQL, at psql command line or executed by appservers or scripts.
2) Stored procedures
3) Store.remove()
4) Using Storm for bulk updates, inserts or deletions.
5) Scripts or appservers running with non-identical [memcache] config section.
6) Scripts or appservers unable to contact memcached servers
7) Network glitches
We should probably ignore the last three. Are the first four a problem
though for the webservice? If it doesn't really matter, fine.
Otherwise we might need some ON DELETE and ON UPDATE database triggers
to keep things in sync. Hopefully this is unnecessary, as it could
well kill our performance gain.
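The first four staleness sources share one cause: writes that never pass through the per-object flush hook. A minimal illustration, using sqlite3 and a plain dict as stand-ins (the function names here are hypothetical, not Launchpad APIs):

```python
# Why raw SQL and bulk updates (items 1-4) leave the cache stale:
# per-object writes can invalidate the cache in a flush hook, but a raw
# UPDATE never touches the Python objects, so no hook ever fires.
import sqlite3

cache = {"Person(1,),application/json,devel": '{"name": "old"}'}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'old')")


def update_via_orm(person_id, name):
    # The ORM path: the flush hook removes the cached representation.
    conn.execute("UPDATE person SET name = ? WHERE id = ?", (name, person_id))
    cache.pop("Person(%d,),application/json,devel" % person_id, None)


def update_via_raw_sql(person_id, name):
    # The raw-SQL path: the database changes but the cache is untouched.
    conn.execute("UPDATE person SET name = ? WHERE id = ?", (name, person_id))


update_via_raw_sql(1, "new")
print(cache)  # the stale '{"name": "old"}' entry is still there
```

This is why the discussion below turns to database-level triggers: only the database sees every write.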
=== added file 'lib/canonical/launchpad/pagetests/webservice/cache.txt'
+The cache starts out empty.
+
+    >>> print cache.get_by_key(key)
+    None
+
+Retrieving a representation of an object populates the cache.
+
+    >>> ignore = webservice.get("/~salgado", api_version="devel").jsonBody()
+
+    >>> cache.get_by_key(key)
+    '{...}'
Is webservice.get making a real connection to the webservice here? I
am curious if the cache is populated when the object is retrieved,
rather than when the transaction that retrieved the object commits.
=== added file 'lib/lp/services/memcache/doc/restful-cache.txt'
> +An object's cache key is derived from its Storm metadata: its database
> +table name and its primary key.
> +    >>> cache_key = cache.key_for(
> +    ...     person, 'media/type', 'web-service-version')
> +    >>> print person.id, cache_key
> +    29 Person(29,),media/type,web-service-version
Do we need the LPCONFIG here too? Or is it ok for edge & production to mix?
Hmm... perhaps IMemcacheClient should grow a real API and
automatically prepend the LPCONFIG - not sure if we have a use case for
edge and production sharing data, but it is convenient to have them
share the same physical Memcached servers.
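Stuart's suggestion amounts to a thin wrapper that prepends the instance name (LPCONFIG) to every key, so instances can share memcached servers without sharing data. A sketch under that assumption; the class names and the real IMemcacheClient API are not from the branch:

```python
class FakeMemcacheClient:
    """Stand-in for a real memcache.Client, backed by a dict."""

    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value

    def delete(self, key):
        self.data.pop(key, None)


class InstanceScopedClient:
    """Prepend the instance name to every key before hitting memcached."""

    def __init__(self, client, instance_name):
        self.client = client
        self.prefix = instance_name + "-"

    def get(self, key):
        return self.client.get(self.prefix + key)

    def set(self, key, value):
        self.client.set(self.prefix + key, value)

    def delete(self, key):
        self.client.delete(self.prefix + key)


backend = FakeMemcacheClient()  # shared physical servers
edge = InstanceScopedClient(backend, "edge")
production = InstanceScopedClient(backend, "production")

edge.set("Person(29,),application/json,devel", '{"self_link": "..."}')
print(production.get("Person(29,),application/json,devel"))  # None: no crosstalk
```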
> +When a Launchpad object is modified, its JSON representations for
> +recognized web service versions are automatically removed from the
> +cache.
> +
> +    >>> person.addressline1 = "New address"
> +    >>> from canonical.launchpad.ftests import syncUpdate
> +    >>> syncUpdate(person)
> +
> + >>> print cache.get(person, json_type, "beta", default="missing")
> + missing
> +
> + >>> print cache.get(person, json_type, "1.0", default="missing")
> + missing
Should we document the cases where this doesn't happen, or is this
irrelevant to the webservice?
...
Leonard Richardson (leonardr) wrote:
> 1) Raw SQL, at psql command line or executed by appservers or scripts.
> 2) Stored procedures
> 3) Store.remove()
> 4) Using Storm for bulk updates, inserts or deletions.
> 5) Scripts or appservers running with non-identical [memcache] config section.
> 6) Scripts or appservers unable to contact memcached servers
> 7) Network glitches
>
> We should probably ignore the last three. Are the first four a problem
> though for the webservice? If it doesn't really matter, fine.
> Otherwise we might need some ON DELETE and ON UPDATE database triggers
> to keep things in sync. Hopefully this is unnecessary, as it could
> well kill our performance gain.
Certainly the first four are a problem. So far we have yet to come to
terms with performance improvements that involve serving stale data from
the web service (or allowing users to use the possibly stale data they
already have).
Can we put the invalidation code in the ON DELETE/ON UPDATE trigger,
rather than in the Storm trigger? Would it hurt performance to just move
the invalidation down a layer?
> +    >>> ignore = webservice.get("/~salgado", api_version="devel").jsonBody()
> +
> +    >>> cache.get_by_key(key)
> +    '{...}'
>
> Is webservice.get making a real connection to the webservice here? I
> am curious if the cache is populated when the object is retrieved,
> rather than when the transaction that retrieved the object commits.
An HTTP request is being constructed and through Python machinations it
is invoking the web service code. But nothing's going over the network.
The cache is populated by the web service code during the GET handling.
Cache population shouldn't have anything to do with the database.
I hope this answers your question.
> > +    >>> cache_key = cache.key_for(
> > +    ...     person, 'media/type', 'web-service-version')
> > +    >>> print person.id, cache_key
> > +    29 Person(29,),media/type,web-service-version
>
> Do we need the LPCONFIG here too? Or is it ok for edge & production to mix?
>
> Hmm... perhaps IMemcacheClient should grow a real API and
> automatically prepend the LPCONFIG - not sure if we have a use case for
> edge and production sharing data, but it is convenient to have them
> share the same physical Memcached servers.
Aaaah, it's absolutely not OK for edge and production to mix data. The
JSON representations include http: links to one server or the other. I
can fix this in the branch by tacking the instance name on to the cache
key.
> Should we document the cases where this doesn't happen, or is this
> irrelevant to the webservice?
>
> store.execute(
> Person.
I'd say that's relevant, and another argument for putting the cache
invalidation in a database hook rather than an ORM hook.
>
> > === added file 'lib/lp/services/memcache/restful.py'
>
> > +class MemcachedStormRepresentationCache(BaseRepresentationCache):
> > + """Caches lazr.restful representations of Storm objects in memcached."""
> > +
> > + def __init__(self):
> > +        self.client = memcache_client_factory()
>
> You should be using getUtility(IMemcacheClient)...
Stuart Bishop (stub) wrote:
On Wed, Jun 2, 2010 at 7:28 PM, Leonard Richardson
<email address hidden> wrote:
>> 1) Raw SQL, at psql command line or executed by appservers or scripts.
>> 2) Stored procedures
>> 3) Store.remove()
>> 4) Using Storm for bulk updates, inserts or deletions.
>> 5) Scripts or appservers running with non-identical [memcache] config section.
>> 6) Scripts or appservers unable to contact memcached servers
>> 7) Network glitches
>>
>> We should probably ignore the last three. Are the first four a problem
>> though for the webservice? If it doesn't really matter, fine.
>> Otherwise we might need some ON DELETE and ON UPDATE database triggers
>> to keep things in sync. Hopefully this is unnecessary, as it could
>> well kill our performance gain.
>
> Certainly the first four are a problem. So far we have yet to come to
> terms with performance improvements that involve serving stale data from
> the web service (or allowing users to use the possibly stale data they
> already have).
>
> Can we put the invalidation code in the ON DELETE/ON UPDATE trigger,
> rather than in the Storm trigger? Would it hurt performance to just move
> the invalidation down a layer?
We can put the invalidation code in a trigger.
It will certainly hurt performance. How much, I'm not sure. It also
depends on how much effort we put into minimizing it. The stored
procedure will need to be invoked for every row updated or deleted.
The simplest approach would be a Python stored procedure for the
trigger that invalidates the cache in band. This would be huge
overhead, both the Python overhead and waiting for network round
trips.
A better approach would be for the trigger to notify another process
of the keys that need invalidating, and have that process do it
asynchronously. This would be less overhead than our existing Slony-I
replication triggers. There would be some lag.
An alternative, which I'd need to investigate, would be to piggy back
on the existing Slony-I replication information. Whenever a row is
updated or deleted, we generate events containing this information so
the changes can be applied on the subscriber database nodes. There
would be some lag here too, but no measurable database impact.
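The asynchronous design Stuart describes (trigger enqueues the affected keys, a separate worker deletes them with some lag) can be sketched with an in-process queue. Everything here is a stand-in: a dict plays memcached, and `on_row_changed` stands in for the NOTIFY/Slony event the trigger would emit.

```python
# Sketch: database trigger -> event queue -> asynchronous invalidator.
import queue
import threading

invalidation_queue = queue.Queue()
cache = {"Person(29,),application/json,devel": "{...}"}


def on_row_changed(table, primary_key):
    # In production this would be a trigger firing NOTIFY or a Slony
    # replication event; here we enqueue the key prefix directly.
    invalidation_queue.put("%s%r" % (table, primary_key))


def invalidation_worker():
    while True:
        prefix = invalidation_queue.get()
        if prefix is None:  # sentinel: shut down
            break
        # Delete every cached representation for that row.
        for key in [k for k in cache if k.startswith(prefix)]:
            del cache[key]
        invalidation_queue.task_done()


worker = threading.Thread(target=invalidation_worker)
worker.start()

on_row_changed("Person", (29,))   # the row changes...
invalidation_queue.put(None)
worker.join()
print(cache)  # {} -- invalidated asynchronously, after some lag
```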
>> +    >>> ignore = webservice.get("/~salgado", api_version="devel").jsonBody()
>> +
>> +    >>> cache.get_by_key(key)
>> +    '{...}'
>>
>> Is webservice.get making a real connection to the webservice here? I
>> am curious if the cache is populated when the object is retrieved,
>> rather than when the transaction that retrieved the object commits.
>
> An HTTP request is being constructed and through Python machinations it
> is invoking the web service code. But nothing's going over the network.
>
> The cache is populated by the web service code during the GET handling.
> Cache population shouldn't have anything to do with the database.
>
> I hope this answers your question.
I'm wondering if it is possible that the cache gets populated, but the
webservice request fails, causing an invalid value to get stored. If
we don't populate the cache on updates then it should be fine.
>> > === added file 'lib/lp/services/memcache/restful.py'
>>
>> > +class MemcachedStormRepresentationCache(BaseRepresentationCache):
Leonard Richardson (leonardr) wrote:
> A better approach would be for the trigger to notify another process
> of the keys that need invalidating, and have that process do it
> asynchronously. This would be less overhead than our existing Slony-I
> replication triggers. There would be some lag.
>
> An alternative, which I'd need to investigate, would be to piggy back
> on the existing Slony-I replication information. Whenever a row is
> updated or deleted, we generate events containing this information so
> the changes can be applied on the subscriber database nodes. There
> would be some lag here too, but no measurable database impact.
I'm +1 on either of these ideas. The only downside is that the invalidation algorithm will become more complex over time, as I make lazr.restful cache more things and as we create more versions of the Launchpad web service.
Here's what I mean. Right now, when User(29,) is invalidated, we need to remove three cache keys along these lines:
User(29,),application/json,beta
User(29,),application/json,1.0
User(29,),application/json,devel
That's one key for every web service version. We need to have access to a list of the currently active versions, either managed separately or obtained from getUtility(IWebServiceConfiguration).
You also brought up the fact that all our Launchpad instances share memcached hardware. So we need to change Launchpad's key generation to include the instance name, and we need to change the invalidation code to have a list of all instance names, and to invalidate nine keys: the same three per-version keys, once for each instance prefix.
In the future we may change lazr.restful to cache the WADL and/or HTML representations as well as the JSON representation. In that case there could be up to 27 cache keys for User(29,).
This isn't unmanageable, but it would be much simpler if we could tell memcached to delete "User(29,).*".
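Memcached has no wildcard delete, but a standard workaround exists: include a per-object generation number in every key, so that "delete User(29,).*" becomes "increment User(29,)'s generation counter", which orphans all of its old keys in one operation (memcached then expires them naturally). A sketch with a dict standing in for memcached; the helper names are invented here:

```python
cache = {}


def generation(obj_key):
    # The current generation number for this object (0 if unseen).
    return cache.setdefault("gen:" + obj_key, 0)


def key_for(obj_key, media_type, version):
    # Every representation key embeds the object's generation number.
    return "%s:%d,%s,%s" % (obj_key, generation(obj_key), media_type, version)


def invalidate(obj_key):
    # One increment makes every existing key for this object unreachable.
    cache["gen:" + obj_key] += 1


cache[key_for("User(29,)", "application/json", "beta")] = "{...}"
cache[key_for("User(29,)", "application/json", "1.0")] = "{...}"

invalidate("User(29,)")
print(key_for("User(29,)", "application/json", "beta") in cache)  # False
```

The old entries still occupy memory until memcached evicts them, but no lookup will ever see them again, and the invalidation cost no longer grows with the number of versions, media types, or instances.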
> I'm wondering if it is possible that the cache gets populated, but the
> webservice request fails, causing an invalid value to get stored. If
> we don't populate the cache on updates then it should be fine.
I'm pretty sure the cache is populated right before the representation is returned to the client. If there's an error it will happen during the representation creation. If for some reason an error happens afterwards it doesn't change the fact that the representation is accurate. The cache is certainly not populated on updates. So I think we're OK.
Preview Diff
=== modified file 'lib/canonical/database/sqlbase.py'
--- lib/canonical/database/sqlbase.py	2010-03-24 17:37:26 +0000
+++ lib/canonical/database/sqlbase.py	2010-06-01 20:53:29 +0000
@@ -28,6 +28,8 @@
 from zope.interface import implements
 from zope.security.proxy import removeSecurityProxy
 
+from lazr.restful.interfaces import IRepresentationCache
+
 from canonical.config import config, dbconfig
 from canonical.database.interfaces import ISQLBase
 
@@ -246,6 +248,11 @@
         """Inverse of __eq__."""
         return not (self == other)
 
+    def __storm_flushed__(self):
+        """Invalidate the web service cache."""
+        cache = getUtility(IRepresentationCache)
+        cache.delete(self)
+
 alreadyInstalledMsg = ("A ZopelessTransactionManager with these settings is "
     "already installed. This is probably caused by calling initZopeless twice.")
 
 
=== added file 'lib/canonical/launchpad/pagetests/webservice/cache.txt'
--- lib/canonical/launchpad/pagetests/webservice/cache.txt	1970-01-01 00:00:00 +0000
+++ lib/canonical/launchpad/pagetests/webservice/cache.txt	2010-06-01 20:53:29 +0000
@@ -0,0 +1,55 @@
+************************
+The representation cache
+************************
+
+Launchpad stores JSON representations of objects in a memcached
+cache. The full cache functionality is tested in lazr.restful and in
+lib/lp/services/memcache/doc/restful-cache.txt. This is just a simple
+integration test.
+
+First we need to get a reference to the cache object, so we can look
+inside.
+
+    >>> from zope.component import getUtility
+    >>> from lazr.restful.interfaces import IRepresentationCache
+    >>> cache = getUtility(IRepresentationCache)
+
+Since the cache is keyed by the underlying database object, we also
+need one of those objects.
+
+    >>> from lp.registry.interfaces.person import IPersonSet
+    >>> login(ANONYMOUS)
+    >>> person = getUtility(IPersonSet).getByName('salgado')
+    >>> key = cache.key_for(person, 'application/json', 'devel')
+    >>> logout()
+
+The cache starts out empty.
+
+    >>> print cache.get_by_key(key)
+    None
+
+Retrieving a representation of an object populates the cache.
+
+    >>> ignore = webservice.get("/~salgado", api_version="devel").jsonBody()
+
+    >>> cache.get_by_key(key)
+    '{...}'
+
+Once the cache is populated with a representation, the cached
+representation is used in preference to generating a new
+representation of that object. We can verify this by putting a fake
+value into the cache and retrieving a representation of the
+corresponding object.
+
+    >>> import simplejson
+    >>> cache.set_by_key(key, simplejson.dumps("Fake representation"))
+
+    >>> print webservice.get("/~salgado", api_version="devel").jsonBody()
+    Fake representation
+
+Cleanup.
+
+    >>> cache.delete(person)
+
+    >>> webservice.get("/~salgado", api_version="devel").jsonBody()
+    {...}
=== modified file 'lib/canonical/launchpad/zcml/webservice.zcml'
--- lib/canonical/launchpad/zcml/webservice.zcml	2010-03-26 19:11:50 +0000
+++ lib/canonical/launchpad/zcml/webservice.zcml	2010-06-01 20:53:29 +0000
@@ -16,6 +16,11 @@
     provides="lazr.restful.interfaces.IWebServiceConfiguration">
   </utility>
 
+  <utility
+    factory="lp.services.memcache.restful.MemcachedStormRepresentationCache"
+    provides="lazr.restful.interfaces.IRepresentationCache">
+  </utility>
+
   <securedutility
     class="canonical.launchpad.systemhomes.WebServiceApplication"
     provides="canonical.launchpad.interfaces.IWebServiceApplication">
 
=== modified file 'lib/lp/services/memcache/client.py'
--- lib/lp/services/memcache/client.py	2009-09-16 12:47:23 +0000
+++ lib/lp/services/memcache/client.py	2010-06-01 20:53:29 +0000
@@ -4,7 +4,9 @@
 """Launchpad Memcache client."""
 
 __metaclass__ = type
-__all__ = []
+__all__ = [
+    'memcache_client_factory'
+    ]
 
 import memcache
 import re
 
=== added file 'lib/lp/services/memcache/doc/restful-cache.txt'
--- lib/lp/services/memcache/doc/restful-cache.txt	1970-01-01 00:00:00 +0000
+++ lib/lp/services/memcache/doc/restful-cache.txt	2010-06-01 20:53:29 +0000
@@ -0,0 +1,102 @@
+****************************************
+The Storm/memcached representation cache
+****************************************
+
+The web service library lazr.restful will store the representations it
+generates in a cache, if a suitable cache implementation is
+provided. We implement a cache that stores representations of Storm
+objects in memcached.
+
+    >>> login('foo.bar@canonical.com')
+
+    >>> from lp.services.memcache.restful import (
+    ...     MemcachedStormRepresentationCache)
+    >>> cache = MemcachedStormRepresentationCache()
+
+An object's cache key is derived from its Storm metadata: its database
+table name and its primary key.
+
+    >>> from zope.component import getUtility
+    >>> from lp.registry.interfaces.person import IPersonSet
+    >>> person = getUtility(IPersonSet).getByName('salgado')
+
+    >>> cache_key = cache.key_for(
+    ...     person, 'media/type', 'web-service-version')
+    >>> print person.id, cache_key
+    29 Person(29,),media/type,web-service-version
+
+    >>> from operator import attrgetter
+    >>> languages = sorted(person.languages, key=attrgetter('englishname'))
+    >>> for language in languages:
+    ...     cache_key = cache.key_for(
+    ...         language, 'media/type', 'web-service-version')
+    ...     print language.id, cache_key
+    119 Language(119,),media/type,web-service-version
+    521 Language(521,),media/type,web-service-version
+
+The cache starts out empty.
+
+    >>> json_type = 'application/json'
+
+    >>> print cache.get(person, json_type, "v1", default="missing")
+    missing
+
+Add a representation to the cache, and you can retrieve it later.
+
+    >>> cache.set(person, json_type, "beta",
+    ...     "This is a representation for version beta.")
+
+    >>> print cache.get(person, json_type, "beta")
+    This is a representation for version beta.
+
+A single object can cache different representations for different
+web service versions.
+
+    >>> cache.set(person, json_type, '1.0',
+    ...     'This is a different representation for version 1.0.')
+
+    >>> print cache.get(person, json_type, "1.0")
+    This is a different representation for version 1.0.
+
+The web service version doesn't have to actually be defined in the
+configuration. (But you shouldn't use this--see below!)
+
+    >>> cache.set(person, json_type, 'no-such-version',
+    ...     'This is a representation for a nonexistent version.')
+
+    >>> print cache.get(person, json_type, "no-such-version")
+    This is a representation for a nonexistent version.
+
+A single object can also cache different representations for different
+media types, not just application/json. (But you shouldn't use
+this--see below!)
+
+    >>> cache.set(person, 'media/type', '1.0',
+    ...     'This is a representation for a strange media type.')
+
+    >>> print cache.get(person, "media/type", "1.0")
+    This is a representation for a strange media type.
+
+When a Launchpad object is modified, its JSON representations for
+recognized web service versions are automatically removed from the
+cache.
+
+    >>> person.addressline1 = "New address"
+    >>> from canonical.launchpad.ftests import syncUpdate
+    >>> syncUpdate(person)
+
+    >>> print cache.get(person, json_type, "beta", default="missing")
+    missing
+
+    >>> print cache.get(person, json_type, "1.0", default="missing")
+    missing
+
+But non-JSON representations, and representations for unrecognized web
+service versions, are _not_ removed from the cache. (This is why you
+shouldn't put such representations into the cache.)
+
+    >>> print cache.get(person, json_type, "no-such-version")
+    This is a representation for a nonexistent version.
+
+    >>> print cache.get(person, "media/type", "1.0")
+    This is a representation for a strange media type.
=== added file 'lib/lp/services/memcache/restful.py'
--- lib/lp/services/memcache/restful.py	1970-01-01 00:00:00 +0000
+++ lib/lp/services/memcache/restful.py	2010-06-01 20:53:29 +0000
@@ -0,0 +1,39 @@
+# Copyright 2010 Canonical Ltd.  This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Storm/memcached implementation of lazr.restful's representation cache."""
+
+import storm
+
+from lp.services.memcache.client import memcache_client_factory
+from lazr.restful.simple import BaseRepresentationCache
+
+__metaclass__ = type
+__all__ = [
+    'MemcachedStormRepresentationCache',
+    ]
+
+
+class MemcachedStormRepresentationCache(BaseRepresentationCache):
+    """Caches lazr.restful representations of Storm objects in memcached."""
+
+    def __init__(self):
+        self.client = memcache_client_factory()
+
+    def key_for(self, obj, media_type, version):
+        storm_info = storm.info.get_obj_info(obj)
+        table_name = storm_info.cls_info.table
+        primary_key = tuple(var.get() for var in storm_info.primary_vars)
+
+        key = (table_name + repr(primary_key)
+               + ',' + media_type + ',' + str(version))
+        return key
+
+    def get_by_key(self, key, default=None):
+        return self.client.get(key) or default
+
+    def set_by_key(self, key, value):
+        self.client.set(key, value)
+
+    def delete_by_key(self, key):
+        self.client.delete(key)
=== modified file 'lib/lp/services/memcache/tests/test_doc.py'
--- lib/lp/services/memcache/tests/test_doc.py	2010-03-03 11:00:42 +0000
+++ lib/lp/services/memcache/tests/test_doc.py	2010-06-01 20:53:29 +0000
@@ -7,9 +7,7 @@
 
 import os.path
 from textwrap import dedent
-import unittest
 
-from zope.component import getUtility
 import zope.pagetemplate.engine
 from zope.pagetemplate.pagetemplate import PageTemplate
 from zope.publisher.browser import TestRequest
@@ -17,9 +15,7 @@
 from canonical.launchpad.testing.systemdocs import (
     LayeredDocFileSuite, setUp, tearDown)
 from canonical.testing.layers import LaunchpadFunctionalLayer, MemcachedLayer
-from lp.services.memcache.interfaces import IMemcacheClient
 from lp.services.testing import build_test_suite
-from lp.testing import TestCase
 
 
 here = os.path.dirname(os.path.realpath(__file__))
@@ -59,11 +55,15 @@
     test.globs['MemcachedLayer'] = MemcachedLayer
 
 
+def suite_for_doctest(filename):
+    return LayeredDocFileSuite(
+        '../doc/%s' % filename,
+        setUp=memcacheSetUp, tearDown=tearDown,
+        layer=LaunchpadFunctionalLayer)
+
+
 special = {
-    'tales-cache.txt': LayeredDocFileSuite(
-        '../doc/tales-cache.txt',
-        setUp=memcacheSetUp, tearDown=tearDown,
-        layer=LaunchpadFunctionalLayer),
+    'tales-cache.txt': suite_for_doctest('tales-cache.txt'),
+    'restful-cache.txt': suite_for_doctest('restful-cache.txt'),
 }
 
 
=== modified file 'versions.cfg'
--- versions.cfg	2010-05-17 20:03:02 +0000
+++ versions.cfg	2010-06-01 20:53:29 +0000
@@ -28,7 +28,7 @@
 lazr.delegates = 1.1.0
 lazr.enum = 1.1.2
 lazr.lifecycle = 1.1
-lazr.restful = 0.9.26
+lazr.restful = 0.9.27
 lazr.restfulclient = 0.9.14
 lazr.smtptest = 1.1
 lazr.testing = 0.1.1
Brad Crittenden (bac) wrote:
Hi Leonard,
Thanks for this branch -- it looks very promising. A 4x increase would be great.
It looks like you haven't pushed your changes to the download cache up yet so lazr.restful 0.9.27 isn't available. Be sure to do that.
You define but don't use 'json_type' in webservice/cache.txt.
The __all__ in memcache/client.py should be on multiple lines.
It looks like there are some valid complaints from 'make lint'. Please run it and clean up the ones that are real.