Merge lp:~mkanat/loggerhead/synchronize-lru_cache into lp:loggerhead

Proposed by Max Kanat-Alexander
Status: Merged
Merged at revision: 418
Proposed branch: lp:~mkanat/loggerhead/synchronize-lru_cache
Merge into: lp:loggerhead
Diff against target: 51 lines (+7/-7)
1 file modified
loggerhead/history.py (+7/-7)
To merge this branch: bzr merge lp:~mkanat/loggerhead/synchronize-lru_cache
Reviewer Review Type Date Requested Status
Matt Nordhoff Approve
Loggerhead Team Pending
Max Kanat-Alexander Pending
Review via email: mp+24651@code.launchpad.net

This proposal supersedes a proposal from 2010-04-23.

Description of the change

Fixes Bug #420738 by synchronizing all our access to lru_cache, which is not thread-safe.

In testing, I don't see any significant performance degradation under access from many different users accessing several different branches.

Revision history for this message
Ian Clatworthy (ian-clatworthy) wrote : Posted in a previous version of this proposal

Max,

Well done on tracking down the problem and thanks for the patch. Aren't threading problems fun? :-)

My main concern with the proposed change is that I'd prefer the Cache object to have the intelligence to Do The Right Thing wrt locking, rather than it being left up to the client to guard access to it. As it stands, we're deciding to use an LRU cache in apps/branch.py and paying the consequences through increased code complexity several layers deeper. What do you think about having a decorator class that provides the synchronized access? E.g. changing the cache creation line to be something like ...

  graph_cache = ThreadSafeCache(bzrlib.lru_cache.LRUCache(10))

and introducing ThreadSafeCache() as the layer with the RLock?

OTOH, this bug is critical and perhaps we ought to just land it as is, particularly as John is looking to rework the caching architecture Real Soon Now anyhow.

Any second opinions?
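Ian's suggested wrapper could be sketched roughly like this (ThreadSafeCache is hypothetical and not part of bzrlib; a plain dict stands in for the LRUCache to keep the sketch self-contained):

```python
import threading

class ThreadSafeCache(object):
    """Hypothetical wrapper, sketched from Ian's suggestion: every
    operation on the wrapped cache is guarded by one RLock, so client
    code never has to manage the locking itself."""

    def __init__(self, cache):
        self._cache = cache
        self._lock = threading.RLock()

    def __getitem__(self, key):
        with self._lock:
            return self._cache[key]

    def __setitem__(self, key, value):
        with self._lock:
            self._cache[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._cache.get(key, default)
```

The cache creation line in apps/branch.py would then read something like `graph_cache = ThreadSafeCache(bzrlib.lru_cache.LRUCache(10))`, as in Ian's example.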

Revision history for this message
Max Kanat-Alexander (mkanat) wrote : Posted in a previous version of this proposal

> Well done on tracking down the problem and thanks for the patch. Aren't
> threading problems fun? :-)

  Fun indeed. :-)

> What do you think
> about having a decorator class that provides the synchronized access? E.g.
> changing the cache creation line to be something like ...
>
> graph_cache = ThreadSafeCache(bzrlib.lru_cache.LRUCache(10))
>
> and introducing ThreadSafeCache() as the layer with the RLock?

  I thought about that, and the problem is that it would depend to some degree on implementation details of lru_cache--if the client started using it in different ways, or if the lru_cache implementation changed in some ways, we'd have to start adding locks in different places.

  John has already said that he's not going to make the LRUCache itself thread-safe (see the bug), and the only accesses we make into it are via this interface, so I think the simplest and most forward-compatible solution is to put the locking in our client code.

Revision history for this message
Matt Nordhoff (mnordhoff) wrote : Posted in a previous version of this proposal

Sounds fine to me, and I've been running it for a few days without problems.

review: Approve
Revision history for this message
John A Meinel (jameinel) wrote : Posted in a previous version of this proposal

I should comment here a little bit. I'm a bit surprised that this actually handles what we want it to.

Specifically, a RevInfoMemoryCache is created for each History object, and a new History is created for each HTTP request (if I've sorted out my info correctly).

As such, *each* HTTP thread ends up with its own lock around the cache. And you don't end up doing much of any cross-thread protection, which was the whole point.
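The flaw John describes can be sketched like this (hypothetical names; a plain dict stands in for the one LRUCache shared across requests):

```python
import threading

class PerInstanceLockCache(object):
    """Sketch of the flawed pattern: each wrapper instance creates its
    OWN RLock, so two instances wrapping the SAME underlying cache
    never actually exclude each other."""

    def __init__(self, cache):
        self._cache = cache
        self._lock = threading.RLock()  # fresh lock per instance

    def set(self, key, value):
        with self._lock:
            self._cache[key] = value

shared = {}  # stands in for the one cache shared across requests
a = PerInstanceLockCache(shared)  # e.g. built for HTTP request 1
b = PerInstanceLockCache(shared)  # e.g. built for HTTP request 2
# a._lock and b._lock are distinct objects: holding one gives no
# protection against a concurrent b.set() mutating the shared cache.
```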

I could probably be convinced to add a potential locking primitive to LRUCache. I wanted to avoid the overhead, but we can just default to None, and allow callers to set a Lock object that we'll respect. Something that says "if self._lock is not None: self._lock.acquire()".

I think that __getitem__ can be protected after the "if node is mru:" check. If two threads are accessing different items concurrently, there is no clear-cut MRU, and at worst the one that thinks it is already 'most recent' will just be 1 step behind.

If this *does* seem to fix the problem, my guess is that it only does so by playing tricks with threads and the GIL, and causing sync points. (aka, by accident)
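The optional-lock idea John floats might look roughly like this. This is a simplified sketch, NOT bzrlib's real LRUCache (an OrderedDict stands in for its double-linked list): the lock defaults to None so single-threaded users pay nothing, and callers may install one that every access respects.

```python
import threading
from collections import OrderedDict

class LockableLRUCache(object):
    """Sketch of an LRU cache with an optional, caller-supplied lock,
    following the 'if self._lock is not None: self._lock.acquire()'
    pattern described above."""

    def __init__(self, max_size=10):
        self._cache = OrderedDict()  # insertion order tracks recency
        self._max_size = max_size
        self._lock = None            # callers may set a threading.Lock

    def __getitem__(self, key):
        if self._lock is not None:
            self._lock.acquire()
        try:
            value = self._cache.pop(key)  # re-insert to mark most recent
            self._cache[key] = value
            return value
        finally:
            if self._lock is not None:
                self._lock.release()

    def __setitem__(self, key, value):
        if self._lock is not None:
            self._lock.acquire()
        try:
            self._cache.pop(key, None)
            self._cache[key] = value
            if len(self._cache) > self._max_size:
                self._cache.popitem(last=False)  # evict least recent
        finally:
            if self._lock is not None:
                self._lock.release()
```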

Revision history for this message
John A Meinel (jameinel) wrote : Posted in a previous version of this proposal

Check out: lp:///~jameinel/bzr/lru_locking

It still needs tests, and possibly 'live-fire' testing, hammering the cache structure with lots of threads. However, I'm fairly sure that I've gotten a minimal locked section (we lock as little as possible), while still maintaining correctness.

LRUCache.add() is not specifically thread-safe if you have cleanups and are adding different values for the same key. We might want to consider putting a big lock around all of add() for that case, but callers could also do that...

The main reason I poked at it was because you can't get 'fine-grained' locking around the LRU double-linked-list unless it is implemented in the cache itself.

Revision history for this message
Max Kanat-Alexander (mkanat) wrote : Posted in a previous version of this proposal

Okay, John is right--the bug is not fixed. I just ran into it on bzr.mozilla.org again.

For loggerhead, a quick workaround is to make the lock global instead of a per-instance item.

We do have cleanups, and I suppose it's possible for us to be updating an existing branch with a new cache, right? So I think we'll need this lock in loggerhead anyway.

review: Needs Fixing
Revision history for this message
Matt Nordhoff (mnordhoff) wrote :

Fine by me, and I've been running it for a couple of days without issues. But I don't particularly know if it's right -- I approved the original version of this, too. :-\ Still...how could this one not be right?

review: Approve
Revision history for this message
John A Meinel (jameinel) wrote :

Matt Nordhoff wrote:
> You have been requested to review the proposed merge of lp:~mkanat/loggerhead/synchronize-lru_cache into lp:loggerhead.
>
> Fixes Bug #420738 by synchronizing all our access to lru_cache, which is not thread-safe.
>
> In testing, I don't see any significant performance degradation under access from many different users accessing several different branches.

Seems fine to me. Note that the bzr-history-db stuff would supersede
this work.

John
=:->


Revision history for this message
Max Kanat-Alexander (mkanat) wrote :

> Seems fine to me. Note that the bzr-history-db stuff would supersede
> this work.

  Only for situations where bzr-history-db was installed though, right?

Preview Diff

=== modified file 'loggerhead/history.py'
--- loggerhead/history.py	2010-04-27 15:45:52 +0000
+++ loggerhead/history.py	2010-05-04 12:59:22 +0000
@@ -152,6 +152,9 @@
                 filename=rich_filename(paths[1], kind),
                 file_id=file_id))
 
+# The lru_cache is not thread-safe, so we need a lock around it for
+# all threads.
+rev_info_memory_cache_lock = threading.RLock()
 
 class RevInfoMemoryCache(object):
     """A store that validates values against the revids they were stored with.
@@ -168,9 +171,6 @@
 
     def __init__(self, cache):
         self._cache = cache
-        # lru_cache is not thread-safe, so we need to lock all accesses.
-        # It is even modified when doing a get() on it.
-        self._lock = threading.RLock()
 
     def get(self, key, revid):
         """Return the data associated with `key`, subject to a revid check.
@@ -178,11 +178,11 @@
         If a value was stored under `key`, with the same revid, return it.
         Otherwise return None.
         """
-        self._lock.acquire()
+        rev_info_memory_cache_lock.acquire()
         try:
             cached = self._cache.get(key)
         finally:
-            self._lock.release()
+            rev_info_memory_cache_lock.release()
         if cached is None:
             return None
         stored_revid, data = cached
@@ -194,11 +194,11 @@
     def set(self, key, revid, data):
         """Store `data` under `key`, to be checked against `revid` on get().
         """
-        self._lock.acquire()
+        rev_info_memory_cache_lock.acquire()
         try:
             self._cache[key] = (revid, data)
         finally:
-            self._lock.release()
+            rev_info_memory_cache_lock.release()
 
 # Used to store locks that prevent multiple threads from building a
 # revision graph for the same branch at the same time, because that can

Subscribers

People subscribed via source and target branches