Merge lp:~kamstrup/zeitgeist/notification into lp:zeitgeist/0.1
Status: Merged
Merged at revision: not available
Proposed branch: lp:~kamstrup/zeitgeist/notification
Merge into: lp:zeitgeist/0.1
Diff against target: 1018 lines (+760/-29), 9 files modified
  _zeitgeist/engine/Makefile.am (+1/-0)
  _zeitgeist/engine/notify.py (+261/-0)
  _zeitgeist/engine/remote.py (+49/-19)
  _zeitgeist/engine/resonance_engine.py (+10/-0)
  doc/dbus/source/index.rst (+6/-0)
  test/datamodel-test.py (+44/-2)
  test/remote-test.py (+64/-0)
  zeitgeist/client.py (+225/-4)
  zeitgeist/datamodel.py (+100/-4)
To merge this branch: bzr merge lp:~kamstrup/zeitgeist/notification
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Mikkel Kamstrup Erlandsen | | | Approve
Markus Korn | | | Needs Fixing
Markus Korn | thorough | | Pending

Review via email: mp+15583@code.launchpad.net
Commit message
Description of the change
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
Uh, sorry. The InstallMonitor method looks like:
InstallMonitor(in o client_
i.e. templates in plural
Seif Lotfy (seif) wrote:
WHOA, if I got it right THIS ROCKS!
So basically the client exposes a DBus method to get notified by the engine,
right? So the engine can target specific clients, right? IF YES
I bow to your mighty implementation
2009/12/2 Mikkel Kamstrup Erlandsen <email address hidden>
> Mikkel Kamstrup Erlandsen has proposed merging
> lp:~kamstrup/zeitgeist/notification into lp:zeitgeist.
>
> Requested reviews:
> Zeitgeist-Engine (zeitgeist): thorough
> Related bugs:
> #488967 Add event notification and subscription system
> https:/
>
>
> The notification/
> more formal review now.
>
> What I do is add two new methods to the org.gnome.
>
> InstallMonitor(in o client_
> RemoveMonitor(in o client_
>
> The client_monitor_path points to a client side object that exposes the
> DBus interface org.gnome.
>
> Notify(in aE events)
>
> I add some convenience methods in ZeitgeistClient to set up a monitor
> simply by calling: client.
> Easy peasy lemon squeezy for application developers I hope.
>
> TODO: There are two missing items - which doesn't block a merge though
> IMHO:
>
> - Unit test for RemoveMonitor and that monitors for crashing clients are
> cleaned up
> - What to do in case of duplicate events when we notify? Or what to do
> about insertion errors causing dropped events? Right now I hook in at the
> remote.py level and that makes it a nice clean cut, but I can not detect
> these cases... I can hook in at the resonance_engine.py level and look for
> these details, but the logic will become a lot more complex...
> --
> https:/
> You are subscribed to branch lp:zeitgeist.
>
> === modified file '_zeitgeist/
> --- _zeitgeist/
> +++ _zeitgeist/
> @@ -3,5 +3,6 @@
> app_PYTHON = \
> __init__.py \
> extension.py \
> + notify.py \
> resonance_engine.py \
> remote.py
>
> === added file '_zeitgeist/
> --- _zeitgeist/
> +++ _zeitgeist/
> @@ -0,0 +1,209 @@
> +# -.- coding: utf-8 -.-
> +
> +# Zeitgeist
> +#
> +# Copyright © 2009 Mikkel Kamstrup Erlandsen <email address hidden>
> +#
> +# This program is free software: you can redistribute it and/or modify
> +# it under the terms of the GNU Lesser General Public License as published
> by
> +# the Free Software Foundation, either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU Lesser General Public License for more det...
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
> WHOA if i got it right THIS ROCKS!
> so basically the client exposes a DBus method to get notified by the engine
> right? So the engine knows can target specific clients right? IF YES
> I bow to your mighty implementation
Spot on :-) I think you describe it way better than me :-)
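The callback direction confirmed here can be sketched without any DBus at all: each client registers an object the engine can invoke, so the engine pushes to specific clients rather than broadcasting a signal. The class and method names below are illustrative stand-ins, not the real zeitgeist API:

```python
# Plain-Python stand-in for the push architecture (no DBus involved).
# Names are hypothetical; the real engine proxies to a client-side
# org.gnome.zeitgeist.Monitor object over the session bus.
class Engine:
    def __init__(self):
        self._monitors = {}  # (owner, object_path) -> notify callable

    def install_monitor(self, owner, path, notify):
        # The client hands the engine a way to call back into it.
        self._monitors[(owner, path)] = notify

    def insert_events(self, events):
        # The engine targets each registered client individually.
        for notify in self._monitors.values():
            notify(events)

received = []
engine = Engine()
engine.install_monitor(":1.42", "/org/example/monitor", received.extend)
engine.insert_events(["event-1", "event-2"])
```

In the real branch the `notify` callable is a DBus proxy method invoked asynchronously, but the ownership structure is the same.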
Markus Korn (thekorn) wrote:
Thanks Mikkel.
This is not a complete code review as I'm running out of time right now, but a start.
There are two bugs in notify.py
* in .install_monitor() it should be self._monitors[
* the debug statement in .remove_monitor() does not look right, it must be something like log.debug("Removing monitor %s %s", owner, monitor_path)
Notifications in general:
Monitor.Notify() should get an additional argument to indicate the kind of notification, like 'InsertedEvent', 'RemovedEvent', 'ChangedEvent'. So I propose an in_signature like
in_signature
ZeitgeistClient:
ZeitgeistClient
One minor thing, maybe unrelated to this diff and just out of interest: why are you using *two* tabs for one indentation level? It is taking up so much space.
- 1205. By Mikkel Kamstrup Erlandsen
  Add some clarifying comments in the install_monitor()
  Fix debug statement in remove_monitor()
- 1206. By Mikkel Kamstrup Erlandsen
  Add a monitor_removed_handler arg to remove_monitor()
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
> There are two bugs in notify.py
> * in .install_monitor() it should be self._monitors[
That's not necessary since hash(monitor) == monitor_key, by construction in _MonitorProxy. I added a comment about this to make the ideas more clear.
> * the debug statement in .remove_monitor() doe not look right, must be sth.
> like log.debug("Removing monitor %s %s", owner, monitor_path)
Fixed.
> Notifications in general:
> Monitor.Notify() should get an additional argument to indicate the kind of
> notification, like 'InsertedEvent', 'RemovedEvent', 'ChangedEvent'. So I
> propose a in_signature like
> in_signature=
Right. Either an additional arg, or a new callback method. The new-arg approach is more future proof since we can extend it. I think we should use an integer enumeration to indicate the notification type, though, like we do for StorageState and ResultType.
And btw. we can not have a ChangedEvent in the current API. (and I think it is a good idea to keep events immutable)
I will hopefully get around to adding the extra arg tonight.
> ZeitgeistClient:
> ZeitgeistClient
> reply_handler, so people can react on successful removals (one example is
> quitting the mainloop in a unittest :))
Fixed
> One minor thing, maybe unrelated to this diff and just out of interest: Why
> are you using *two* tabs for one indention level? - it is taking so much space
Where am I doing that? It is not on purpose :-)
Markus Korn (thekorn) wrote:
>> There are two bugs in notify.py
>> * in .install_monitor() it should be self._monitors[
>
> That's not necessary since hash(monitor) == monitor_key, by construction in _MonitorProxy. I added a comment about this to make the ideas more clear.
>
hmm, I don't get it, and I'm pretty sure this is not working this
way. Why do you want an object->object mapping for self._monitors? I
think what you intend is to have a hash(object)
therefore it has to be self._monitors[
The self._monitors.pop() code in .remove_monitor() will fail as it is
right now; I could create a unittest for it in a bit ;)
>> * the debug statement in .remove_monitor() doe not look right, must be sth.
>> like log.debug("Removing monitor %s %s", owner, monitor_path)
>
> Fixed.
>
>> Notifications in general:
>> Monitor.Notify() should get an additional argument to indicate the kind of
>> notification, like 'InsertedEvent', 'RemovedEvent', 'ChangedEvent'. So I
>> propose a in_signature like
>> in_signature=
>
> Right. Either an additional arg, or a new callback method. The new-arg approach is more future proff since we can extend it. I think we should use an integer enumeration to indicate the notification type though. Like we do for StorageState and ResultType.
>
> And btw. we can not have a ChangedEvent in the current API. (and I think it is a good idea to keep events immutable)
>
> I will hopefully get around to adding the extra arg tonight.
>
Right, an int arg makes sense, and also events should of course stay
immutable; it was just a badly chosen example, written without thinking
about it.
>> ZeitgeistClient:
>> ZeitgeistClient
>> reply_handler, so people can react on successful removals (one example is
>> quitting the mainloop in a unittest :))
>
> Fixed
>
>> One minor thing, maybe unrelated to this diff and just out of interest: Why
>> are you using *two* tabs for one indention level? - it is taking so much space
>
> ? Where am i doing that? It is not on purpose :-)
Sorry, my bad. I'm just not used to the look and feel of using tabs
for indentation, and my editor has a tab-width of 8 spaces, which makes
the code hardly readable on small screens.
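The dict-key problem Markus points out earlier in this thread can be reproduced in a few lines of plain Python: a class that defines `__hash__` but not `__eq__` compares by identity, so a dict keyed by the proxy object itself cannot be looked up from a second, equal-hashing proxy. A minimal stand-in (class simplified from the `_MonitorProxy` in notify.py):

```python
class MonitorProxy:
    """Cut-down stand-in: defines __hash__ but no __eq__."""
    def __init__(self, owner, path):
        self.owner, self.path = owner, path

    def __hash__(self):
        # Same scheme as _MonitorProxy.hash(): one integer per (owner, path)
        return hash("%s#%s" % (self.owner, self.path))

monitors = {}
a = MonitorProxy(":1.42", "/org/example/monitor")
monitors[a] = a

# A second proxy for the same owner/path hashes equally, but without
# __eq__ the dict lookup falls back to identity comparison and misses.
b = MonitorProxy(":1.42", "/org/example/monitor")
same_hash = hash(a) == hash(b)           # True
found_by_object = b in monitors          # False: identity mismatch
found_by_hash = hash(b) in {hash(a): a}  # True: plain int keys compare by value
```

This is exactly why keying `self._monitors` by the integer hash (rather than by the proxy object) works across separate proxy instances, which is the fix commit 1209 eventually lands.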
- 1207. By Mikkel Kamstrup Erlandsen
  Fix typo in debug string
- 1208. By Mikkel Kamstrup Erlandsen
  Fix signature of the dispatch_handler() in client.remove_monitor(). Also fix a warning statement
- 1209. By Mikkel Kamstrup Erlandsen
  Fix insertion/removal of monitors in notify.py. The attached test case exposes a bug where we must use the monitor hash as key instead of simply the monitor itself
- 1210. By Mikkel Kamstrup Erlandsen
  Change DBus API for the Monitor.Notify() method to take a uint32 as first argument.
  This first arg is an enumeration, NotifyInsert=0 and NotifyDelete=1, designating
  the type of notification. Handling of delete notifications still remains to be done. I wrote in the docs that we only
  return the event ids in the event structs on NotifyDelete
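The enumeration values come straight from the commit message above (NotifyInsert=0, NotifyDelete=1); a sketch of how such a flag could look as plain integer constants, in the spirit of StorageState and ResultType (the class and helper names here are hypothetical):

```python
class NotificationType:
    # Values as stated in commit 1210; passed as a uint32 over DBus.
    NOTIFY_INSERT = 0
    NOTIFY_DELETE = 1

def describe(kind):
    # Illustrative helper for dispatching on the flag.
    if kind == NotificationType.NOTIFY_INSERT:
        return "insert"
    return "delete"
```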
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
Ok. I've implemented Notify() with the extra insert/delete flag argument in the branch now.
I think it's great that we notify on deletions, but there's something we have to consider. If we want to do template matching on all monitors in DeleteEvents() calls then we need to extract the full Event object for each event id passed to DeleteEvents(
Right now I wrote in the client docs that delete notifications only fill in the Event.id property, leaving the rest of the event template blank. This does not solve the problem completely though (it is still open how to do template matching, for example). We have a few options:
1) Always pass *all* event ids to all monitors on DeleteEvents. Since deletions should be infrequent this should not be a problem. Besides, the apps might have some views that need updating if they are showing outdated events. These events might not be watched by a monitor.
2) Don't send any events with a delete notification at all. A delete notification in this scenario is simply a flag to the monitoring app that it needs to re-query the data for all active views.
Solution 1) can be quite messy if some app decides to delete 10,000 events. This means we'd send 10,000 full event structs to all monitors :-S
Other ideas?
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
Seif had an idea on IRC. With deletes we only send the events with the highest and the lowest timestamps from the deleted set. This should be fairly cheap to query on the engine and won't cause a lot of traffic either.
The events sent for deletions should be empty templates with only their timestamps and ids set.
It'll require a bit of code, but I think I like the idea... Comments?
Siegfried Gevatter (rainct) wrote:
2009/12/3 Mikkel Kamstrup Erlandsen <email address hidden>:
> With deletes we only send the events with the highest and the lowest timestamps from the deleted set.
If we go this way we have to do the query for the timestamps, send out
the timestamps, have the application check whether they are affected,
have the application request all items within the timerange again, do
a findevents query and return all events again.
I think we should just return a tuple with the IDs of the deleted
events (i.e. just propagate the parameter with which DeleteEvents is
called); it's simpler and cleaner.
Doing this right would mean having NotifyInsertion() and
NotifyDeletion() methods, but I don't see how this should be a problem.
We have to keep them unchanged in both cases (being multiple methods
or a single method with an integer for the type) if we don't want to
break the API, and adding a new method if we ever need a new type
shouldn't be a problem either (and it gives us more freedom in case
special-casing it, like with NotifyDeletion only getting IDs, is
deemed convenient).
--
Siegfried-Angel Gevatter Pujals (RainCT)
Free Software Developer 363DEAE3
- 1211. By Mikkel Kamstrup Erlandsen
  Comments and doc updates
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
I tend to agree with Siegfried. The catch-22 with two methods on the monitor API is that it'll also require that apps pass two callbacks to client.
Which incidentally reminds me that these callbacks should probably have an optional user_data argument...
Anyway, I have a worry regarding NotifyDelete(). Since we always pass all event ids we can end up in some heavy-load situations, especially on embedded devices. Consider:
* Almost all apps are connected to Zeitgeist and have a monitor; say 20 running apps
* To speed things up each app has a local cache of ~200 events
* Now we delete 1000 items around Christmas last year
* We send DeleteNotify with 1000 uint32s to 20 apps
* Each of the 20 apps iterate through all the 1000 ids and remove them from their cache
* One or more apps might do a repaint in order to reflect the changes in their model
Now consider that all the apps will most likely start doing all this within the same millisecond... This will likely grind the entire UI to a halt for a second or two. We can lighten the load a bit by *also* including the first and last timestamp of the affected events.
With the timerange apps can do easy validation of their caches. We still have the same worst case as before, but for cases like the example above most apps won't have to invalidate their caches at all, and they can check that simply by also keeping a timerange for their cache.
For consistency we should also pass the timerange in NotifyInsert.
So, to make it clear: the Monitor interface has two methods:
NotifyInsert(in (xx) time_range, in aE events)
NotifyDelete(in (xx) time_range, in au event_ids)
We can also consider adding a time range arg to InstallMonitor, and make the signature like so:
InstallMonitor(in o client_
If we can agree on this, then I'll implement it and merge it tonight.
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
Hmmm... Maybe I was too quick about the time_range idea on InstallMonitor()... Most apps will probably be interested in a time_range that expands as time progresses, so that they also get notified of new events - that's sorta the point with notifications :-)
Maybe if the end of the timerange is <= 0 it means "indefinite"...
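One way to read the "indefinite" suggestion is to normalize the range when the monitor is installed. This is only a sketch of that idea; the function name and the MAX_INT64 sentinel are illustrative:

```python
MAX_INT64 = 2**63 - 1  # largest value of the DBus 'x' (int64) type

def normalize_time_range(begin, end):
    # A non-positive end timestamp is read as "no upper bound", so the
    # monitor keeps matching events inserted after installation time.
    if end <= 0:
        end = MAX_INT64
    return (begin, end)
```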
Markus Korn (thekorn) wrote:
I don't see why deletion of events should depend on a time
range at all.
Let's say all event entries with manifestation =
Manifestation.
first and last deleted events makes no sense for the client to clean up
its 'local' cache, because there are probably still events with other
manifestations in the DB.
So the big problem we are facing here is: How can we avoid sending big
batches of data over DBus for *many* clients.
For me there are two solutions (I must admit the first one came to my
mind in a kind of 'lack of coffee' state, and I like the second
solution most, but for the sake of completeness I'm writing down
both):
* maintain a JSON dump of deleted/inserted events as a kind of
"action-log" on the local filesystem, and send the filename as
argument of Notify(). It is now up to the clients to parse the dump
and use the information (BAD idea, see disclaimer above)
* notifications are an async action, so per design we do not guarantee
instantaneous notifications. We should maintain a
notification-queue for each Monitor. Let's change the signature of
Notify to
Notify(in u SIGNAL_TYPE, in aE EVENTS, in u PENDING_
Notify() will always only send a default number of N (let's say 10)
notifications, all other notifications are put into the
notification-queue for this Monitor. PENDING_
the length of the queue for this monitor.
We should then add a
GetNotifications(aE EVENT_TEMPLATE, u SIGNAL_TYPE)
method to the org.gnome.
a Notify() run for this monitor.
The EVENT_TEMPLATE argument allows picking up specific items of the
notification-queue, and SIGNAL_TYPE is like SIGNAL_TYPE in Notify()
but can also be a combination of signals (not sure how to do this, but
this is an impl. detail)
Items in the notification queue should only stay there for a certain
time, so the queue will clean-up itself.
So at the end, each client is responsible for getting as many
notifications as needed; there might be some clients which are only
interested to know *if* and *how* (delete/insert) the DB changed, and
for them getting only 10 notifications is perfectly fine. And other
clients will use GetNotifications at some point in time to 'ask' for changes.
Seif Lotfy (seif) wrote:
I honestly think sending the two timestamps, the smallest and largest
timestamps of the "notification" for deleted items, makes sense to me since
it covers both solutions you just proposed.
It only gives out a time period in which something changed, allowing the
clients to requery the engine for this time period, getting a new set of
events. This alone is for me better than the first proposal. Now while your
second solution is more appealing to me, in which way does it cause less
traffic than my proposal? We will still request the PENDING_
which for me is the same as querying the events between 2 timestamps.
A solution imho would be using the 2 timestamps in combination with the
pending notifications: making sure we first encapsulate the timeframe that
is targeted by the notification, then allowing apps to either requery or
ask for pending notifications.
Did anyone get that?
2009/12/4 Markus Korn <email address hidden>
> I don't get the point why deleting of events is depending on a time
> range at all.
> Let's say all event entries with manifestation =
> Manifestation.
> first and last deleted events makes no sense for client to clean up
> his 'local' cache, because there a probably still events with other
> manifestations in the DB.
>
> So the big problem we are facing here is: How can we avoid sending big
> batches of data over DBus for *many* clients.
>
> For me there are two solutions (I must admit the first one came in my
> mind in a kind of 'lack of coffee'-state, and I like the second
> solution most, but for the sake of completeness, I'm writing down
> both):
>
> * maintain a JSON dump of deleted/inserted events as a kind of
> "action-log" on the local filesystem, and send the filename as
> argument of Notify(). It is now up to the clients to parse the dump
> and use the information (BAD idea, see disclaimer above)
>
> * notifications are a async action, so we per design do not guarantee
> instantaneous notifications per design. We should maintain a
> notification-queue for each Monitor. Let's change the signature of
> Notify to
> Notify(in u SIGNAL_TYPE, in aE EVENTS, in u PENDING_
> Notify() will always only send a default number of N (let's say 10)
> notifications, all other notifications are put into the
> notification-queue for this Monitor. PENDING_
> the length of the queue for this monitor.
> We should then add a
> GetNotifications(aE EVENT_TEMPLATE, u SIGNAL_TYPE)
> method to the org.gnome.
> a Notify() run for this monitor.
> The EVENT_TEMPLATE argument allows to pick up specific items of the
> notification-queue, and SIGNAL _TYPE is like SIGNAL_TYPE in Notify()
> but can also be a combination of signal (not sure how to do this, but
> this is impl. detail)
> Items in the notification queue should only stay there for a certain
> time, so the queue will clean-up itself.
> So at the end, each client is responsible to get as much notifications
> as needed, there might be some clients which are only interested to
> know *if* and *how* (delete/insert) the DB changed, for them get...
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
> * maintain a JSON dump of deleted/inserted events ...
> SNIP
> * notifications are a async action, so we per design do not guarantee
> instantaneous notifications per design. We should maintain a
> notification-queue for each Monitor....
Both your proposals require that we keep state on the server side. I think that is a bad idea. It is just begging for leaks and memory ballooning. Unless we go to the full extent and keep monitor logs in SQLite tmp tables and whatnot. It will increase complexity considerably.
Also, I dislike your second proposal because it is simply a LOT more complicated than what we have now, both API-wise and impl.-wise for the server. And it will result in a lot of DBus calls if there are many clients.
But basically I am against any kind of solution that requires us to keep any kind of state around. From a Message Passing design perspective it is also a no-no.
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
I've thought a lot about this... and I think I have my final standpoint. It's based on a use case analysis. What are our normal use cases for deletions?
1) Delete all events about something that happened in the last half hour. This may be around 100 events at most. This is the pr0n- and present-shopping case
2) Delete events regarding files from three days in the last week. This is the "OMG it is embarrassing that ILuvHelloKitty.txt keeps popping up" case. This may be 1000 events if you are *very* busy
3) Delete all I did in December last year. This is the "I was planning a murder" case. Busy planning could generate 300 related events/day for one month, i.e. approx. 10,000 events.
Note that we don't have a "OMG delete all I did the last month" because we hope that blacklisting and privacy mode makes us clean in the short term. Also day-to-day deletions will help with the short term lookbacks.
CLAIM 1: Of the three cases, 1) is without doubt the most frequent one.
CLAIM 2: Apps will most commonly be showing events from the near-term past, e.g. 1 week back in time.
Based on this analysis I think we should:
ACT 1: Define time range to monitor in InstallMonitor. To watch ahead in the future simply use (now, MAX_INT64) as time range
ACT 2: Pass time range and all deleted event ids in: NotifyDelete(in (xx) time_range, in au event_ids). If the monitor's time range is completely out of the deletions' time range we don't notify it.
ACT 3: For coherency also pass time range in: NotifyInsert(in (xx) time_range, in aE events) - even though the time range is technically redundant here, it is mighty convenient to have.
In order to progress on this matter I think I will put my foot down here, unless someone can clearly explain why this approach would suck. Let the flames begin :-)
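ACT 1 and ACT 2 above boil down to a simple interval intersection test. A sketch of that check (the function name is hypothetical; timestamps are illustrative milliseconds):

```python
MAX_INT64 = 2**63 - 1

def overlaps(monitor_range, deletion_range):
    # ACT 2: a monitor is only notified if its (begin, end) time range
    # intersects the time range spanned by the deleted events.
    m_begin, m_end = monitor_range
    d_begin, d_end = deletion_range
    return m_begin <= d_end and d_begin <= m_end

# ACT 1: "watch from now on" is expressed as (now, MAX_INT64), so
# future insertions and deletions always intersect such a monitor.
now = 1259884800000  # illustrative timestamp
assert overlaps((now, MAX_INT64), (now + 1000, now + 5000))
assert not overlaps((now, MAX_INT64), (0, now - 1))  # old history import
```

The last line is the cheap case from the discussion: deleting (or importing) events entirely in the past never touches a forward-looking monitor.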
- 1212. By Mikkel Kamstrup Erlandsen
  Implement some convenience methods regarding time ranges:
  TimeRange.intersect(time_range) - calculates the intersection TimeRange of self and time_range
  Event.in_time_range(time_range) - returns True if event.timestamp lies in time_range
- 1213. By Mikkel Kamstrup Erlandsen
  Whopping commit updating the Monitor/Notify framework as follows:
  org.gnome.zeitgeist.Monitor now has two methods:
  NotifyInsert(in (xx) time_range, in aE events)
  NotifyDelete(in (xx) time_range, in au event_ids)
  The client.install_monitor() method needed to be adapted to take two callbacks instead.
  The NotifyDelete call always receives _all_ deleted event ids. We can however at times
  drop the notification altogether if the Monitor time range is outside of the time range
  of the deleted events. The core notification system, notify.py, needed an update to this new DBus signature.
  I also changed delete_events() in resonance_engine.py to return the max and min
  timestamps for the affected events. Add some convenience constructors to TimeRange, namely .always() and .from_now().
- 1214. By Mikkel Kamstrup Erlandsen
  Allow an empty set of Monitor templates to match anything
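The wildcard rule from commit 1214 is easy to sketch. Here a simplified dict-subset check stands in for the real Event.matches_template() semantics (the event fields shown are purely illustrative):

```python
def monitor_matches(event, templates):
    # Commit 1214: an empty template set acts as a wildcard.
    if len(templates) == 0:
        return True
    # Simplified stand-in for Event.matches_template(): a template
    # matches when all of its key/value pairs appear in the event.
    return any(tmpl.items() <= event.items() for tmpl in templates)

event = {"interpretation": "VisitEvent", "actor": "app://firefox.desktop"}
wildcard = monitor_matches(event, [])                              # True
hit = monitor_matches(event, [{"actor": "app://firefox.desktop"}])  # True
miss = monitor_matches(event, [{"actor": "app://gedit.desktop"}])   # False
```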
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
I pushed the described impl. to my branch
Markus Korn (thekorn) wrote:
No need for flames ;)
But there is one question left: how expensive (performance-wise) are Monitors?
Let's say I have a journal-view-like application like the activity
journal or the filesystem, how should I use Monitors?
Should I use one big Monitor which notifies me for all changes? Or
should I use like 100 Monitors (one for each shown or cached day)?
What if I also have cached views where events are grouped by
Interpretation (or tags once available), would I also add Monitors for
each group?
I will check the last changes you made in a bit; unfortunately this
merge proposal and the bug on Launchpad are Oops'ing for me, so I have
to review the code locally. But I think we should (and could) merge
this branch ASAP.
Mikkel Kamstrup Erlandsen (kamstrup) wrote:
I think a good rule of thumb is "one monitor per view".
Performance-wise it should not be an issue to have something like 50 monitors on the engine side, unless there is a mass import of events, like a long FF history.
Cases like history imports might still be cheap if all events fall outside the time range scope of the monitors.
Preview Diff
1 | === modified file '_zeitgeist/engine/Makefile.am' |
2 | --- _zeitgeist/engine/Makefile.am 2009-11-26 17:32:59 +0000 |
3 | +++ _zeitgeist/engine/Makefile.am 2009-12-05 00:14:12 +0000 |
4 | @@ -3,5 +3,6 @@ |
5 | app_PYTHON = \ |
6 | __init__.py \ |
7 | extension.py \ |
8 | + notify.py \ |
9 | resonance_engine.py \ |
10 | remote.py |
11 | |
12 | === added file '_zeitgeist/engine/notify.py' |
13 | --- _zeitgeist/engine/notify.py 1970-01-01 00:00:00 +0000 |
14 | +++ _zeitgeist/engine/notify.py 2009-12-05 00:14:12 +0000 |
15 | @@ -0,0 +1,261 @@ |
16 | +# -.- coding: utf-8 -.- |
17 | + |
18 | +# Zeitgeist |
19 | +# |
20 | +# Copyright © 2009 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@gmail.com> |
21 | +# |
22 | +# This program is free software: you can redistribute it and/or modify |
23 | +# it under the terms of the GNU Lesser General Public License as published by |
24 | +# the Free Software Foundation, either version 3 of the License, or |
25 | +# (at your option) any later version. |
26 | +# |
27 | +# This program is distributed in the hope that it will be useful, |
28 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
29 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
30 | +# GNU Lesser General Public License for more details. |
31 | +# |
32 | +# You should have received a copy of the GNU Lesser General Public License |
33 | +# along with this program. If not, see <http://www.gnu.org/licenses/>. |
34 | + |
35 | +import dbus |
36 | +import logging |
37 | + |
38 | +from zeitgeist.datamodel import TimeRange |
39 | + |
40 | +logging.basicConfig(level=logging.DEBUG) |
41 | +log = logging.getLogger("zeitgeist.notify") |
42 | + |
43 | +class _MonitorProxy (dbus.Interface): |
44 | + """ |
45 | + Connection to a org.gnome.zeitgeist.Monitor interface running on some |
46 | + client to the Zeitgeist engine. |
47 | + """ |
48 | + |
49 | + def __init__ (self, owner, monitor_path, time_range, event_templates): |
50 | + """ |
51 | + Create a new MonitorProxy for the unique DBus name *owner* on the |
52 | + path *monitor_path*. Note that the path points to an object |
53 | + living under *owner* and not necessarily inside the current |
54 | + process. |
55 | + |
56 | + :param owner: Unique DBus name of the process owning the monitor |
57 | + :param monitor_path: The DBus object path to the monitor object |
58 | + in the client process |
59 | + :param time_range: a |
60 | + :class:`TimeRange <zeitgeist.datamodel.TimeRange` instance |
61 | + the monitored events must lie within |
62 | + :param event_templates: List of event templates that any events |
63 | + must match in order to notify this monitor |
64 | + :type event_templates: list of |
65 | + :class:`Events <zeitgeist.datamodel.Event>` |
66 | + """ |
67 | + bus = dbus.SessionBus() |
68 | + proxy = bus.get_object(owner, monitor_path) |
69 | + dbus.Interface.__init__( |
70 | + self, |
71 | + proxy, |
72 | + "org.gnome.zeitgeist.Monitor" |
73 | + ) |
74 | + |
75 | + self._owner = owner |
76 | + self._path = monitor_path |
77 | + self._time_range = time_range |
78 | + self._templates = event_templates |
79 | + |
80 | + def __str__ (self): |
81 | + return "%s%s" % (self.owner, self.path) |
82 | + |
83 | + def get_owner (self) : return self._owner |
84 | + owner = property(get_owner, doc="Read only property with the unique DBus name of the process owning the monitor") |
85 | + |
86 | + def get_path (self) : return self._path |
87 | + path = property(get_path, doc="Read only property with the object path of the monitor in the process owning the monitor") |
88 | + |
89 | + def get_time_range (self) : return self._time_range |
90 | + time_range = property(get_time_range, doc="Read only property with matched :class:`TimeRange` of the monitor") |
91 | + |
92 | + def __hash__ (self): |
93 | + return hash(_MonitorProxy.hash(self._owner, self._path)) |
94 | + |
95 | + def matches (self, event): |
96 | + """ |
97 | + Returns True of this monitor has a template matching *event* |
98 | + |
99 | + :param event: The event to check against the monitor's templates |
100 | + :type event: :class:`Event <zeitgeist.datamodel.Event>` |
101 | + """ |
102 | + if not event.in_time_range(self._time_range): |
103 | + return False |
104 | + |
105 | + if len(self._templates) == 0: |
106 | + return True |
107 | + |
108 | + for tmpl in self._templates: |
109 | + if event.matches_template(tmpl) : return True |
110 | + return False |
111 | + |
112 | + def notify_insert (self, time_range, events): |
113 | + """ |
114 | + Asynchronously notify the monitor that a collection of matching events has been inserted |
115 | + |
116 | + The events will *not* be filtered through the :meth:`matches` |
117 | + method. It is the responsability of the caller to do that. |
118 | + """ |
119 | + for ev in events : ev._make_dbus_sendable() |
120 | + self.NotifyInsert(time_range, events, |
121 | + reply_handler=self._notify_reply_handler, |
122 | + error_handler=self._notify_error_handler) |
123 | + |
124 | + def notify_delete (self, time_range, event_ids): |
125 | + """ |
126 | + Asynchronously notify the monitor that a collection of events has been deleted |
127 | + |
128 | + Only the event ids are passed back to the monitor. Note this |
129 | + method does not check that *time_range* is within the monitor's |
130 | + time range. That is the responsibility of the caller |
131 | + """ |
132 | + self.NotifyDelete(time_range, event_ids, |
133 | + reply_handler=self._notify_reply_handler, |
134 | + error_handler=self._notify_error_handler) |
135 | + |
136 | + @staticmethod |
137 | + def hash(owner, path): |
138 | + """ |
139 | + Calculate an integer uniquely identifying the monitor based on |
140 | + the DBus name of the owner and object path of the monitor itself |
141 | + """ |
142 | + return hash("%s#%s" % (owner, path)) |
143 | + |
144 | + def _notify_reply_handler (self): |
145 | + """ |
146 | + Async reply handler for invoking Notify() over DBus |
147 | + """ |
148 | + pass |
149 | + |
150 | + def _notify_error_handler (self, error): |
151 | + """ |
152 | + Async error handler for invoking Notify() over DBus |
153 | + """ |
154 | + log.warn("Failed to deliver notification: %s" % error) |
155 | + |
156 | +class MonitorManager: |
157 | + |
158 | + def __init__ (self): |
159 | + self._monitors = {} # hash -> Monitor |
160 | + self._connections = {} # owner -> list of paths |
161 | + |
162 | + # Listen for disconnecting clients to clean up potential dangling monitors |
163 | + dbus.SessionBus().add_signal_receiver (self._name_owner_changed, |
164 | + signal_name="NameOwnerChanged", |
165 | + dbus_interface=dbus.BUS_DAEMON_IFACE, |
166 | + arg2="") |
167 | + |
168 | + def install_monitor (self, owner, monitor_path, time_range, event_templates): |
169 | + """ |
170 | + Install a :class:`_MonitorProxy` and set it up to receive |
171 | + notifications when events are pushed into the :meth:`notify_insert` |
172 | + and :meth:`notify_delete` methods. |
173 | + |
174 | + Monitors will automatically be cleaned up if :const:`monitor.owner` |
175 | + disconnects from the bus. To manually remove a monitor call the |
176 | + :meth:`remove_monitor` on this object passing in |
177 | + :const:`monitor.owner` and :const:`monitor.path`. |
178 | + |
179 | + :param owner: Unique DBus name of the process owning the monitor |
180 | + :type owner: string |
181 | + :param monitor_path: The DBus object path for the monitor object |
182 | + in the client process |
183 | + :type monitor_path: String or :class:`dbus.ObjectPath` |
184 | + :param time_range: a |
185 | + :class:`TimeRange <zeitgeist.datamodel.TimeRange>` instance |
186 | + the monitored events must lie within |
187 | + :param event_templates: A list of |
188 | + :class:`Event <zeitgeist.datamodel.Event>` templates to match |
189 | + :returns: This method has no return value |
190 | + """ |
191 | + # Check that we don't already have the monitor, so we don't |
192 | + # needlessly set up a full dbus Proxy |
193 | + monitor_key = _MonitorProxy.hash(owner, monitor_path) |
194 | + if monitor_key in self._monitors: |
195 | + raise KeyError("Monitor for %s already installed at path %s" % (owner, monitor_path)) |
196 | + |
197 | + monitor = _MonitorProxy(owner, monitor_path, time_range, event_templates) |
198 | + self._monitors[monitor_key] = monitor |
199 | + |
200 | + if not monitor.owner in self._connections: |
201 | + self._connections[owner] = set() |
202 | + |
203 | + self._connections[owner].add(monitor.path) |
204 | + |
205 | + def remove_monitor (self, owner, monitor_path): |
206 | + """ |
207 | + Remove an installed monitor. |
208 | + |
209 | + :param owner: Unique DBus name of the process owning the monitor |
210 | + :type owner: string |
211 | + :param monitor_path: The DBus object path for the monitor object |
212 | + in the client process |
213 | + :type monitor_path: String or :class:`dbus.ObjectPath` |
214 | + """ |
215 | + log.debug("Removing monitor %s%s", owner, monitor_path) |
216 | + mon = self._monitors.pop(_MonitorProxy.hash(owner, monitor_path), None) |
217 | + |
218 | + if not mon: |
219 | + raise KeyError("Unknown monitor %s for owner %s" % (monitor_path, owner)) |
220 | + |
221 | + conn = self._connections[owner] |
222 | + if conn : conn.remove(monitor_path) |
223 | + |
224 | + def notify_insert (self, time_range, events): |
225 | + """ |
226 | + Send events to matching monitors. |
227 | + The monitors will only be notified about the events for which |
228 | + they have a matching template, i.e. :meth:`_MonitorProxy.matches` |
229 | + returns True. |
230 | + |
231 | + :param events: The events to check against the monitor templates |
232 | + :type events: list of :class:`Events <zeitgeist.datamodel.Event>` |
233 | + """ |
234 | + for mon in self._monitors.itervalues(): |
235 | + log.debug("Checking monitor %s" % mon) |
236 | + matching_events = [] |
237 | + |
238 | + for ev in events: |
239 | + if mon.matches(ev): |
240 | + matching_events.append(ev) |
241 | + |
242 | + if matching_events : |
243 | + log.debug("Notifying %s about %s insertions" % (mon, len(matching_events))) |
244 | + mon.notify_insert(time_range.intersect(mon.time_range), matching_events) |
245 | + |
246 | + def notify_delete (self, time_range, event_ids): |
247 | + """ |
248 | + Notify monitors with matching time ranges that the events with the given ids have been deleted from the log |
249 | + |
250 | + :param time_range: A TimeRange instance that spans the deleted events |
251 | + :param event_ids: A list of event ids |
252 | + """ |
253 | + for mon in self._monitors.itervalues(): |
254 | + log.debug("Checking monitor %s" % mon) |
255 | + intersection = mon.time_range.intersect(time_range) |
256 | + |
257 | + if intersection: |
258 | + log.debug("Notifying %s about %s deletions" % (mon, len(event_ids))) |
259 | + mon.notify_delete(intersection, event_ids) |
260 | + |
261 | + def _name_owner_changed (self, owner, old, new): |
262 | + """ |
263 | + Clean up monitors for processes disconnecting from the bus |
264 | + """ |
265 | + # Don't proceed if this is a disconnect of an unknown connection |
266 | + if not owner in self._connections : |
267 | + return |
268 | + |
269 | + conn = self._connections[owner] |
270 | + |
271 | + log.debug("Client disconnected %s" % owner) |
272 | + for path in list(conn): |
273 | + self.remove_monitor(owner, path) |
274 | + |
275 | + self._connections.pop(owner) |
276 | + |
277 | |
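The registry scheme in `notify.py` above keys each monitor by a hash of the owner's unique bus name plus the monitor's object path, and keeps a per-owner set of paths so all of a client's monitors can be dropped when `NameOwnerChanged` reports a disconnect. Below is a minimal standalone sketch of that bookkeeping, not the branch's `_MonitorProxy`/`MonitorManager` classes; `RegistrySketch` and `monitor_key` are hypothetical names.

```python
def monitor_key(owner, path):
    # Mirrors the _MonitorProxy.hash() idea: one key per (owner, path) pair
    return hash("%s#%s" % (owner, path))

class RegistrySketch:
    def __init__(self):
        self._monitors = {}     # key -> (owner, path)
        self._connections = {}  # owner -> set of object paths

    def install(self, owner, path):
        key = monitor_key(owner, path)
        if key in self._monitors:
            raise KeyError("Monitor for %s already installed at path %s" % (owner, path))
        self._monitors[key] = (owner, path)
        self._connections.setdefault(owner, set()).add(path)

    def remove(self, owner, path):
        self._monitors.pop(monitor_key(owner, path), None)
        self._connections.get(owner, set()).discard(path)

    def drop_owner(self, owner):
        # Disconnect cleanup; iterate over a copy because remove() mutates the set
        for path in list(self._connections.get(owner, ())):
            self.remove(owner, path)
        self._connections.pop(owner, None)
```

Note the copy in `drop_owner()`: removing a monitor shrinks the very set being iterated, which would otherwise raise at runtime.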
278 | === modified file '_zeitgeist/engine/remote.py' |
279 | --- _zeitgeist/engine/remote.py 2009-12-01 10:50:16 +0000 |
280 | +++ _zeitgeist/engine/remote.py 2009-12-05 00:14:12 +0000 |
281 | @@ -3,6 +3,7 @@ |
282 | # Zeitgeist |
283 | # |
284 | # Copyright © 2009 Siegfried-Angel Gevatter Pujals <rainct@ubuntu.com> |
285 | +# Copyright © 2009 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@gmail.com> |
286 | # |
287 | # This program is free software: you can redistribute it and/or modify |
288 | # it under the terms of the GNU Lesser General Public License as published by |
289 | @@ -21,7 +22,9 @@ |
290 | import dbus.service |
291 | import logging |
292 | |
293 | +from zeitgeist.datamodel import Event, Subject, TimeRange, StorageState, ResultType |
294 | from _zeitgeist.engine import get_default_engine |
295 | +from _zeitgeist.engine.notify import MonitorManager |
296 | from zeitgeist.client import ZeitgeistDBusInterface |
297 | from _zeitgeist.singleton import SingletonApplication |
298 | |
299 | @@ -30,23 +33,8 @@ |
300 | DBUS_INTERFACE = ZeitgeistDBusInterface.INTERFACE_NAME |
301 | SIG_EVENT = "asaasay" |
302 | |
303 | -def special_str(obj): |
304 | - """ Return a string representation of obj |
305 | - If obj is None, return an empty string. |
306 | - """ |
307 | - return unicode(obj) if obj is not None else "" |
308 | - |
309 | -def make_dbus_sendable(event): |
310 | - for n, value in enumerate(event[0]): |
311 | - event[0][n] = special_str(value) |
312 | - for subject in event[1]: |
313 | - for n, value in enumerate(subject): |
314 | - subject[n] = special_str(value) |
315 | - event[2] = special_str(event[2]) |
316 | - return event |
317 | - |
318 | class RemoteInterface(SingletonApplication): |
319 | - |
320 | + |
321 | _dbus_properties = { |
322 | "version": property(lambda self: (0, 2, 99)), |
323 | } |
324 | @@ -56,6 +44,7 @@ |
325 | def __init__(self, start_dbus=True, mainloop=None): |
326 | SingletonApplication.__init__(self) |
327 | self._mainloop = mainloop |
328 | + self._notifications = MonitorManager() |
329 | |
330 | # Reading stuff |
331 | |
332 | @@ -82,7 +71,8 @@ |
333 | except ValueError: |
334 | # This is what we want, it means that there are no |
335 | # holes in the list |
336 | - return [make_dbus_sendable(event) for event in events] |
337 | + for ev in events : ev._make_dbus_sendable() |
338 | + return events |
339 | |
340 | @dbus.service.method(DBUS_INTERFACE, |
341 | in_signature="(xx)a("+SIG_EVENT+")uuu", out_signature="au") |
342 | @@ -128,6 +118,10 @@ |
343 | def InsertEvents(self, events): |
344 | """Inserts events into the log. Returns an array containing the ids of the inserted events |
345 | |
346 | + Any monitors with matching templates will get notified about |
347 | + the insertion. Note that the monitors are notified *after* the |
348 | + events have been inserted. |
349 | + |
350 | :param events: List of events to be inserted in the log. |
351 | If you are using the Python bindings you may pass |
352 | :class:`Event <zeitgeist.datamodel.Event>` instances |
353 | @@ -137,7 +131,23 @@ |
354 | the id of the existing event will be returned |
355 | :rtype: Array of unsigned 32 bits integers. DBus signature au. |
356 | """ |
357 | - return _engine.insert_events(events) |
358 | + if not events : return [] |
359 | + |
360 | + event_ids = _engine.insert_events(events) |
361 | + |
362 | + # FIXME: Filter out duplicate- or failed event insertions //kamstrup |
363 | + _events = [] |
364 | + min_stamp = events[0][0][Event.Timestamp] |
365 | + max_stamp = min_stamp |
366 | + for ev, ev_id in zip(events, event_ids): |
367 | + _ev = Event(ev) |
368 | + _ev[0][Event.Id] = ev_id |
369 | + _events.append(_ev) |
370 | + min_stamp = min(min_stamp, _ev.timestamp) |
371 | + max_stamp = max(max_stamp, _ev.timestamp) |
372 | + self._notifications.notify_insert(TimeRange(min_stamp, max_stamp), _events) |
373 | + |
374 | + return event_ids |
375 | |
376 | @dbus.service.method(DBUS_INTERFACE, in_signature="au", out_signature="") |
377 | def DeleteEvents(self, ids): |
378 | @@ -147,7 +157,9 @@ |
379 | :meth:`FindEventIds` |
380 | :type ids: list of integers |
381 | """ |
382 | - _engine.delete_events(ids) |
383 | + # FIXME: Notify monitors - how do we do this? //kamstrup |
384 | + min_stamp, max_stamp = _engine.delete_events(ids) |
385 | + self._notifications.notify_delete(TimeRange(min_stamp, max_stamp), ids) |
386 | |
387 | @dbus.service.method(DBUS_INTERFACE, in_signature="", out_signature="") |
388 | def DeleteLog(self): |
389 | @@ -166,8 +178,13 @@ |
390 | as it will affect all applications using Zeitgeist""" |
391 | if self._mainloop: |
392 | self._mainloop.quit() |
394 | |
395 | # Properties interface |
400 | |
401 | @dbus.service.method(dbus_interface=dbus.PROPERTIES_IFACE, |
402 | in_signature="ss", out_signature="v") |
403 | @@ -190,3 +207,16 @@ |
404 | def GetAll(self, interface_name): |
405 | return dict((k, v.fget(self)) for (k,v) in self._dbus_properties.items()) |
406 | |
407 | + # Notifications interface |
408 | + |
409 | + @dbus.service.method(DBUS_INTERFACE, |
410 | + in_signature="o(xx)a("+SIG_EVENT+")", sender_keyword="owner") |
411 | + def InstallMonitor(self, monitor_path, time_range, event_templates, owner=None): |
412 | + event_templates = map(Event, event_templates) |
413 | + time_range = TimeRange(time_range[0], time_range[1]) |
414 | + self._notifications.install_monitor(owner, monitor_path, time_range, event_templates) |
415 | + |
416 | + @dbus.service.method(DBUS_INTERFACE, |
417 | + in_signature="o", sender_keyword="owner") |
418 | + def RemoveMonitor(self, monitor_path, owner=None): |
419 | + self._notifications.remove_monitor(owner, monitor_path) |
420 | |
421 | === modified file '_zeitgeist/engine/resonance_engine.py' |
422 | --- _zeitgeist/engine/resonance_engine.py 2009-12-01 22:26:22 +0000 |
423 | +++ _zeitgeist/engine/resonance_engine.py 2009-12-05 00:14:12 +0000 |
424 | @@ -463,9 +463,19 @@ |
425 | return id |
426 | |
427 | def delete_events (self, ids): |
428 | + # Extract min and max timestamps for deleted events |
429 | + self._cursor.execute(""" |
430 | + SELECT MIN(timestamp), MAX(timestamp) |
431 | + FROM event |
432 | + WHERE id IN (%s) |
433 | + """ % ",".join(["?"] * len(ids)), ids) |
434 | + min_stamp, max_stamp = self._cursor.fetchone() |
435 | + |
436 | # FIXME: Delete unused interpretation/manifestation/text/etc. |
437 | self._cursor.execute("DELETE FROM event WHERE id IN (%s)" |
438 | % ",".join(["?"] * len(ids)), ids) |
439 | + |
440 | + return min_stamp, max_stamp |
441 | |
442 | def _ensure_event_wrapping(self, event): |
443 | """ |
444 | |
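`delete_events()` above builds its `IN (...)` clause by joining one `?` placeholder per id, so only the clause shape is interpolated while the values themselves are bound by sqlite3. A runnable sketch of the same pattern against a throwaway in-memory table (`delete_events_sketch` and the sample rows are illustrative; the real engine schema has more columns):

```python
import sqlite3

def delete_events_sketch(ids):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, timestamp INTEGER)")
    conn.executemany("INSERT INTO event VALUES (?, ?)",
                     [(1, 100), (2, 130), (3, 160)])
    placeholders = ",".join(["?"] * len(ids))
    # Grab the time range of the doomed rows first, as the engine does,
    # so monitors can be told which range the deletion touched
    min_stamp, max_stamp = conn.execute(
        "SELECT MIN(timestamp), MAX(timestamp) FROM event WHERE id IN (%s)"
        % placeholders, ids).fetchone()
    conn.execute("DELETE FROM event WHERE id IN (%s)" % placeholders, ids)
    remaining = [row[0] for row in conn.execute("SELECT id FROM event")]
    return (min_stamp, max_stamp), remaining
```

Joining placeholders rather than the ids keeps the query safe regardless of where the id list came from.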
445 | === modified file 'doc/dbus/source/index.rst' |
446 | --- doc/dbus/source/index.rst 2009-11-30 21:46:25 +0000 |
447 | +++ doc/dbus/source/index.rst 2009-12-05 00:14:12 +0000 |
448 | @@ -68,6 +68,12 @@ |
449 | .. autoclass:: ZeitgeistClient |
450 | :members: |
451 | |
452 | +Monitor |
453 | ++++++++ |
454 | + |
455 | +.. autoclass:: Monitor |
456 | + :members: |
457 | + |
458 | |
459 | .. module:: _zeitgeist.engine.remote |
460 | |
461 | |
462 | === modified file 'test/datamodel-test.py' |
463 | --- test/datamodel-test.py 2009-11-30 07:57:58 +0000 |
464 | +++ test/datamodel-test.py 2009-12-05 00:14:12 +0000 |
465 | @@ -7,7 +7,7 @@ |
466 | sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..")) |
467 | |
468 | from zeitgeist.datamodel import * |
469 | - |
470 | +from testutils import parse_events |
471 | import unittest |
472 | |
473 | class CategoryTest (unittest.TestCase): |
474 | @@ -154,6 +154,48 @@ |
475 | |
476 | e.manifestation="ILLEGAL SNAFU" |
477 | self.assertFalse(e.matches_template(template)) |
478 | - |
479 | + |
480 | + def testTemplateFiltering(self): |
481 | + template = Event.new_for_values(interpretation="stfu:OpenEvent") |
482 | + events = parse_events("test/data/five_events.js") |
483 | + filtered_events = filter(template.matches_event, events) |
484 | + self.assertEquals(2, len(filtered_events)) |
485 | + |
486 | + def testInTimeRange(self): |
487 | + ev = Event.new_for_values(timestamp=10) |
488 | + self.assertTrue(ev.in_time_range(TimeRange(0, 20))) |
489 | + self.assertFalse(ev.in_time_range(TimeRange(0, 5))) |
490 | + self.assertFalse(ev.in_time_range(TimeRange(15, 20))) |
491 | + |
492 | +class TimeRangeTest (unittest.TestCase): |
493 | + |
494 | + def testEquality(self): |
495 | + self.assertFalse(TimeRange(0,1) == TimeRange(0,2)) |
496 | + self.assertTrue(TimeRange(0,1) == TimeRange(0,1)) |
497 | + |
498 | + def testIntersectWithEnclosing(self): |
499 | + outer = TimeRange(0, 10) |
500 | + inner = TimeRange(3,6) |
501 | + always = TimeRange.always() |
502 | + |
503 | + self.assertTrue(inner.intersect(outer) == inner) |
504 | + self.assertTrue(outer.intersect(inner) == inner) |
505 | + |
506 | + self.assertTrue(always.intersect(inner) == inner) |
507 | + self.assertTrue(inner.intersect(always) == inner) |
508 | + |
509 | + def testIntersectDisjoint(self): |
510 | + t1 = TimeRange(0, 10) |
511 | + t2 = TimeRange(20, 30) |
512 | + self.assertTrue(t1.intersect(t2) is None) |
513 | + self.assertTrue(t2.intersect(t1) is None) |
514 | + |
515 | + def testIntersectOverlap(self): |
516 | + first = TimeRange(0, 10) |
517 | + last = TimeRange(5, 15) |
518 | + |
519 | + self.assertTrue(first.intersect(last) == TimeRange(5, 10)) |
520 | + self.assertTrue(last.intersect(first) == TimeRange(5, 10)) |
521 | + |
522 | if __name__ == '__main__': |
523 | unittest.main() |
524 | |
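The intersection semantics exercised by `TimeRangeTest` above can be captured more compactly than the branch's explicit case analysis: clamp both endpoints and treat an inverted result as empty. A standalone sketch over plain `(begin, end)` tuples, equivalent for well-formed ranges (`intersect` here is illustrative, not the `TimeRange` method):

```python
def intersect(a, b):
    """a and b are (begin, end) tuples; return the overlap, or None
    when the ranges are disjoint."""
    begin = max(a[0], b[0])
    end = min(a[1], b[1])
    return (begin, end) if begin <= end else None
```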
525 | === modified file 'test/remote-test.py' |
526 | --- test/remote-test.py 2009-12-01 10:50:16 +0000 |
527 | +++ test/remote-test.py 2009-12-05 00:14:12 +0000 |
528 | @@ -111,5 +111,69 @@ |
529 | result_events = self.getEventsAndWait(ids) |
530 | self.assertEquals(len(ids), len(result_events)) |
531 | |
532 | + def testMonitorInsertEvents(self): |
533 | + result = [] |
534 | + mainloop = gobject.MainLoop() |
535 | + tmpl = Event.new_for_values(interpretation="stfu:OpenEvent") |
536 | + events = parse_events("test/data/five_events.js") |
537 | + |
538 | + def notify_insert_handler(time_range, events): |
539 | + result.extend(events) |
540 | + mainloop.quit() |
541 | + |
542 | + def notify_delete_handler(time_range, event_ids): |
543 | + mainloop.quit() |
544 | + self.fail("Unexpected delete notification") |
545 | + |
546 | + self.client.install_monitor(TimeRange.always(), [tmpl], notify_insert_handler, notify_delete_handler) |
547 | + self.client.insert_events(events) |
548 | + mainloop.run() |
549 | + |
550 | + self.assertEquals(2, len(result)) |
551 | + |
552 | + def testMonitorDeleteEvents(self): |
553 | + result = [] |
554 | + mainloop = gobject.MainLoop() |
555 | + events = parse_events("test/data/five_events.js") |
556 | + |
557 | + def notify_insert_handler(time_range, events): |
558 | + event_ids = map(lambda ev : ev.id, events) |
559 | + self.client.delete_events(event_ids) |
560 | + |
561 | + def notify_delete_handler(time_range, event_ids): |
562 | + mainloop.quit() |
563 | + result.extend(event_ids) |
564 | + |
565 | + |
566 | + self.client.install_monitor(TimeRange(125, 145), [], notify_insert_handler, notify_delete_handler) |
567 | + |
568 | + self.client.insert_events(events) |
569 | + mainloop.run() |
570 | + |
571 | + self.assertEquals(2, len(result)) |
572 | + |
573 | + def testMonitorInstallRemoval(self): |
574 | + result = [] |
575 | + mainloop = gobject.MainLoop() |
576 | + tmpl = Event.new_for_values(interpretation="stfu:OpenEvent") |
577 | + |
578 | + def notify_insert_handler(notification_type, events): |
579 | + pass |
580 | + |
581 | + def notify_delete_handler(time_range, event_ids): |
582 | + mainloop.quit() |
583 | + self.fail("Unexpected delete notification") |
584 | + |
585 | + mon = self.client.install_monitor(TimeRange.always(), [tmpl], notify_insert_handler, notify_delete_handler) |
586 | + |
587 | + def removed_handler(result_state): |
588 | + result.append(result_state) |
589 | + mainloop.quit() |
590 | + |
591 | + self.client.remove_monitor(mon, removed_handler) |
592 | + mainloop.run() |
593 | + self.assertEquals(1, len(result)) |
594 | + self.assertEquals(1, result.pop()) |
595 | + |
596 | if __name__ == "__main__": |
597 | unittest.main() |
598 | |
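The three monitor tests above all follow the same shape: install a monitor with a time range and templates, trigger activity, and let the notification callback quit the mainloop. The engine-side dispatch they exercise reduces to template filtering per monitor, which can be sketched without DBus (`dispatch_insert`, the dict-based monitors, and the predicate signature are all hypothetical):

```python
def dispatch_insert(monitors, events):
    """For each monitor, collect the events its predicate matches and
    invoke its insert callback with just those. Returns the number of
    callbacks fired, mirroring MonitorManager.notify_insert()."""
    fired = 0
    for mon in monitors:
        matching = [ev for ev in events if mon["matches"](ev)]
        if matching:
            mon["on_insert"](matching)
            fired += 1
    return fired
```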
599 | === modified file 'zeitgeist/client.py' |
600 | --- zeitgeist/client.py 2009-12-01 10:49:34 +0000 |
601 | +++ zeitgeist/client.py 2009-12-05 00:14:12 +0000 |
602 | @@ -20,10 +20,12 @@ |
603 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
604 | |
605 | import dbus |
606 | +import dbus.service |
607 | import dbus.mainloop.glib |
608 | import logging |
609 | import os.path |
610 | import sys |
612 | |
613 | from xml.dom.minidom import parseString as minidom_parse |
614 | |
615 | @@ -31,6 +33,11 @@ |
616 | |
617 | from zeitgeist.datamodel import Event, Subject, TimeRange, StorageState, ResultType |
618 | |
619 | +SIG_EVENT = "asaasay" |
620 | + |
621 | +logging.basicConfig(level=logging.DEBUG) |
622 | +log = logging.getLogger("zeitgeist.client") |
623 | + |
624 | class ZeitgeistDBusInterface(dbus.Interface): |
625 | """ Central DBus interface to the zeitgeist engine |
626 | |
627 | @@ -152,10 +159,103 @@ |
628 | self.INTERFACE_NAME |
629 | ) |
630 | |
631 | +class Monitor (dbus.service.Object): |
632 | + """ |
633 | + DBus object for monitoring the Zeitgeist log for certain types |
634 | + of events. |
635 | + |
636 | + Monitors are normally instantiated indirectly by calling |
637 | + :meth:`ZeitgeistClient.install_monitor`. |
638 | + |
639 | + It is important to understand that the Monitor instance lives on the |
640 | + client side and exposes a DBus object there; the Zeitgeist engine |
641 | + calls back to the monitor when matching events are registered. |
642 | + |
643 | + For the callback signature refer to :meth:`ZeitgeistClient.install_monitor` |
644 | + """ |
645 | + |
646 | + # Used in Monitor._next_path() to generate unique path names |
647 | + _last_path_id = 0 |
648 | + |
649 | + def __init__ (self, time_range, event_templates, insert_callback, delete_callback, monitor_path=None): |
650 | + if not monitor_path: |
651 | + monitor_path = Monitor._next_path() |
652 | + elif isinstance(monitor_path, (str, unicode)): |
653 | + monitor_path = dbus.ObjectPath(monitor_path) |
654 | + |
655 | + self._time_range = time_range |
656 | + self._templates = event_templates |
657 | + self._path = monitor_path |
658 | + self._insert_callback = insert_callback |
659 | + self._delete_callback = delete_callback |
660 | + dbus.service.Object.__init__(self, dbus.SessionBus(), monitor_path) |
661 | + |
662 | + |
663 | + def get_path (self): return self._path |
664 | + path = property(get_path, doc="Read only property with the DBus path of the monitor object") |
665 | + |
666 | + def get_time_range(self): return self._time_range |
667 | + time_range = property(get_time_range, doc="Read only property with the :class:`TimeRange` matched by this monitor") |
668 | + |
669 | + def get_templates (self): return self._templates |
670 | + templates = property(get_templates, doc="Read only property with installed templates") |
671 | + |
672 | + @dbus.service.method("org.gnome.zeitgeist.Monitor", |
673 | + in_signature="(xx)a("+SIG_EVENT+")") |
674 | + def NotifyInsert(self, time_range, events): |
675 | + """ |
676 | + Receive notification that a set of events matching the monitor's |
677 | + templates has been recorded in the log. |
678 | + |
679 | + This method is the raw DBus callback and should normally not be |
680 | + overridden. Events are received via the *insert_callback* |
681 | + argument given in the constructor to this class. |
682 | + |
683 | + :param time_range: A two-tuple of 64 bit integers with the minimum |
684 | + and maximum timestamps found in *events*. DBus signature (xx) |
685 | + :param events: A list of DBus event structs, signature a(asaasay) |
686 | + with the events matching the monitor. |
687 | + See :meth:`ZeitgeistClient.install_monitor` |
688 | + """ |
689 | + self._insert_callback(TimeRange(time_range[0], time_range[1]), map(Event, events)) |
690 | + |
691 | + @dbus.service.method("org.gnome.zeitgeist.Monitor", |
692 | + in_signature="(xx)au") |
693 | + def NotifyDelete(self, time_range, event_ids): |
694 | + """ |
695 | + Receive notification that a set of events within the monitor's |
696 | + matched time range has been deleted. Note that this notification |
697 | + will also be emitted for deleted events that don't match the |
698 | + event templates of the monitor. It's just the time range which |
699 | + is considered here. |
700 | + |
701 | + This method is the raw DBus callback and should normally not be |
702 | + overridden. Events are received via the *delete_callback* |
703 | + argument given in the constructor to this class. |
704 | + |
705 | + :param time_range: A two-tuple of 64 bit integers with the minimum |
706 | + and maximum timestamps of the deleted events. DBus signature (xx) |
707 | + :param event_ids: A list of event ids. An event id is simply |
708 | + an unsigned 32 bit integer. DBus signature au. |
709 | + """ |
710 | + self._delete_callback(TimeRange(time_range[0], time_range[1]), event_ids) |
711 | + |
712 | + def __hash__ (self): |
713 | + return hash(self._path) |
714 | + |
715 | + @classmethod |
716 | + def _next_path(cls): |
717 | + """ |
718 | + Generate a new unique DBus object path for a monitor |
719 | + """ |
720 | + cls._last_path_id += 1 |
721 | + return dbus.ObjectPath("/org/gnome/zeitgeist/monitor/%s" % cls._last_path_id) |
722 | + |
723 | class ZeitgeistClient: |
724 | """ |
725 | - Convenience APIs to have a Pythonic way to call the running Zeitgeist |
726 | - engine. For raw DBus access use the ZeitgeistDBusInterface class. |
727 | + Convenience APIs to have a Pythonic way to call and monitor the running |
728 | + Zeitgeist engine. For raw DBus access use the |
729 | + :class:`ZeitgeistDBusInterface` class. |
730 | |
731 | Note that this class only does asynchronous DBus calls. This is almost |
732 | always the right thing to do. If you really want to do synchronous |
733 | @@ -418,6 +518,125 @@ |
734 | self._iface.GetEvents(event_ids, |
735 | reply_handler=lambda raw : events_reply_handler(map(Event, raw)), |
736 | error_handler=error_handler) |
737 | + |
738 | + def delete_events(self, event_ids, reply_handler=None, error_handler=None): |
739 | + """ |
740 | + Delete a collection of events from the zeitgeist log given their |
741 | + event ids. |
742 | + |
743 | + The deletion will be done asynchronously, and this method returns |
744 | + immediately. To check whether the deletions went well supply |
745 | + the *reply_handler* and/or *error_handler* functions. The |
746 | + reply handler should not take any argument. The error handler |
747 | + must take a single argument - being the error. |
748 | + |
749 | + Without custom handlers any errors will be printed to stderr. |
750 | + |
751 | + In order to use this method there needs to be a mainloop |
752 | + running. |
753 | + """ |
754 | + self._check_list_or_tuple(event_ids) |
755 | + self._check_members(event_ids, int) |
756 | + |
757 | + if not callable(reply_handler): |
758 | + reply_handler = self._void_reply_handler |
759 | + if not callable(error_handler): |
760 | + error_handler = lambda err : self._stderr_error_handler(err) |
761 | + |
762 | + self._iface.DeleteEvents(event_ids, |
763 | + reply_handler=reply_handler, |
764 | + error_handler=error_handler) |
765 | + |
766 | + def install_monitor (self, time_range, event_templates, notify_insert_handler, notify_delete_handler, monitor_path=None): |
767 | + """ |
768 | + Install a monitor in the Zeitgeist engine that calls back |
769 | + when events matching *event_templates* are logged. The matching |
770 | + is done exactly as in the *find_** family of methods and in |
771 | + :meth:`Event.matches_template <zeitgeist.datamodel.Event.matches_template>`. |
772 | + Furthermore matched events must also have timestamps lying in |
773 | + *time_range*. |
774 | + |
775 | + To remove a monitor call :meth:`remove_monitor` on the returned |
776 | + :class:`Monitor` instance. |
777 | + |
778 | + The *notify_insert_handler* will be called when events matching |
779 | + the monitor are inserted into the log. The *notify_delete_handler* |
780 | + function will be called when events lying within the monitored |
781 | + time range are deleted. |
782 | + |
783 | + :param time_range: A :class:`TimeRange <zeitgeist.datamodel.TimeRange>` |
784 | + that matched events must lie within. To obtain a time range |
785 | + from now and indefinitely into the future use |
786 | + :meth:`TimeRange.from_now() <zeitgeist.datamodel.TimeRange.from_now>` |
787 | + :param event_templates: The event templates to look for |
788 | + :param notify_insert_handler: Callback for receiving notifications |
789 | + about insertions of matching events. The callback should take |
790 | + a :class:`TimeRange` as first parameter and a list of |
791 | + :class:`Events` as the second parameter. |
792 | + The time range will cover the minimum and maximum timestamps |
793 | + of the inserted events |
794 | + :param notify_delete_handler: Callback for receiving notifications |
795 | + about deletions of events in the monitored time range. |
796 | + The callback should take a :class:`TimeRange` as first |
797 | + parameter and a list of event ids as the second parameter. |
798 | + Note that an event id is simply an unsigned integer. |
799 | + :param monitor_path: Optional argument specifying the DBus path |
800 | + to install the client side monitor object on. If none is provided |
801 | + the client will provide one for you namespaced under |
802 | + /org/gnome/zeitgeist/monitor/* |
803 | + :returns: a :class:`Monitor` |
804 | + """ |
805 | + self._check_list_or_tuple(event_templates) |
806 | + self._check_members(event_templates, Event) |
807 | + if not callable(notify_insert_handler): |
808 | + raise TypeError("notify_insert_handler not callable, found %s" % notify_insert_handler) |
809 | + |
810 | + if not callable(notify_delete_handler): |
811 | + raise TypeError("notify_delete_handler not callable, found %s" % notify_delete_handler) |
812 | + |
813 | + |
814 | + mon = Monitor(time_range, event_templates, notify_insert_handler, notify_delete_handler, monitor_path=monitor_path) |
815 | + self._iface.InstallMonitor(mon.path, |
816 | + mon.time_range, |
817 | + mon.templates, |
818 | + reply_handler=self._void_reply_handler, |
819 | + error_handler=lambda err : log.warn("Error installing monitor: %s" % err)) |
820 | + return mon |
821 | + |
822 | + def remove_monitor (self, monitor, monitor_removed_handler=None): |
823 | + """ |
824 | + Remove a :class:`Monitor` installed with :meth:`install_monitor` |
825 | + |
826 | + :param monitor: Monitor to remove. Either as a :class:`Monitor` |
827 | + instance or a DBus object path to the monitor either as a |
828 | + string or :class:`dbus.ObjectPath` |
829 | + :param monitor_removed_handler: A callback function taking |
830 | + one integer argument. 1 on success, 0 on failure. |
831 | + """ |
832 | + if isinstance(monitor, (str,unicode)): |
833 | + path = dbus.ObjectPath(monitor) |
834 | + elif isinstance(monitor, Monitor): |
835 | + path = monitor.path |
836 | + else: |
837 | + raise TypeError("Monitor, str, or unicode expected. Found %s" % type(monitor)) |
838 | + |
839 | + if callable(monitor_removed_handler): |
840 | + |
841 | + def dispatch_handler (error=None): |
842 | + if error : |
843 | + log.warn("Error removing monitor %s: %s" % (monitor, error)) |
844 | + monitor_removed_handler(0) |
845 | + else: monitor_removed_handler(1) |
846 | + |
847 | + reply_handler = dispatch_handler |
848 | + error_handler = dispatch_handler |
849 | + else: |
850 | + reply_handler = self._void_reply_handler |
851 | + error_handler = lambda err : log.warn("Error removing monitor %s: %s" % (monitor, err)) |
852 | + |
853 | + self._iface.RemoveMonitor(path, |
854 | + reply_handler=reply_handler, |
855 | + error_handler=error_handler) |
856 | |
857 | def _check_list_or_tuple(self, collection): |
858 | """ |
859 | @@ -441,12 +660,14 @@ |
860 | """ |
861 | pass |
862 | |
863 | - def _stderr_error_handler(self, exception, normal_reply_handler, normal_reply_data): |
864 | + def _stderr_error_handler(self, exception, normal_reply_handler=None, normal_reply_data=None): |
865 | """ |
866 | Error handler for async DBus calls that prints the error |
867 | to sys.stderr |
868 | """ |
869 | print >> sys.stderr, "Error from Zeitgeist engine:", exception |
870 | - normal_reply_handler(normal_reply_data) |
871 | + |
872 | + if callable(normal_reply_handler): |
873 | + normal_reply_handler(normal_reply_data) |
874 | |
875 | |
876 | |
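`remove_monitor()` above folds the DBus reply and error callbacks into one closure that reports 1 or 0 to the caller's `monitor_removed_handler`. That pattern is easy to isolate; `make_dispatch_handler` below is a hypothetical standalone version with an injectable warn function in place of `log.warn`:

```python
def make_dispatch_handler(monitor_removed_handler, warn=print):
    """Return a single callable usable as both the async reply handler
    (invoked with no argument) and the error handler (invoked with the
    error), reporting 1 on success and 0 on failure."""
    def dispatch_handler(error=None):
        if error:
            warn("Error removing monitor: %s" % error)
            monitor_removed_handler(0)
        else:
            monitor_removed_handler(1)
    return dispatch_handler
```

Using one closure for both paths keeps the success/failure protocol in a single place instead of two lambdas.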
877 | === modified file 'zeitgeist/datamodel.py' |
878 | --- zeitgeist/datamodel.py 2009-12-01 13:07:14 +0000 |
879 | +++ zeitgeist/datamodel.py 2009-12-05 00:14:12 +0000 |
880 | @@ -367,8 +367,17 @@ |
881 | By design this class will be automatically transformed to the DBus |
882 | type (xx). |
883 | """ |
884 | + # Maximal value of our timestamps |
885 | + _max_stamp = 2**63 - 1 |
886 | + |
887 | def __init__ (self, begin, end): |
888 | - super(TimeRange, self).__init__((begin, end)) |
889 | + super(TimeRange, self).__init__((int(begin), int(end))) |
890 | + |
891 | + def __eq__ (self, other): |
892 | + return self.begin == other.begin and self.end == other.end |
893 | + |
894 | + def __str__ (self): |
895 | + return "(%s, %s)" % (self.begin, self.end) |
896 | |
897 | def get_begin(self): |
898 | return self[0] |
899 | @@ -389,10 +398,57 @@ |
900 | @staticmethod |
901 | def until_now(): |
902 | """ |
903 | - Return a TimeRange from 0 to the instant of invocation |
904 | + Return a :class:`TimeRange` from 0 to the instant of invocation |
905 | """ |
906 | return TimeRange(0, int(time.time()*1000)) |
907 | - |
908 | + |
909 | + @staticmethod |
910 | + def from_now(): |
911 | + """ |
912 | + Return a :class:`TimeRange` from the instant of invocation to |
913 | + the end of time |
914 | + """ |
915 | + return TimeRange(int(time.time()*1000), TimeRange._max_stamp) |
916 | + |
917 | + @staticmethod |
918 | + def always(): |
919 | + """ |
920 | + Return a :class:`TimeRange` from the furthest past to the most |
921 | + distant future |
922 | + """ |
923 | + return TimeRange(-TimeRange._max_stamp, TimeRange._max_stamp) |
924 | + |
925 | + def intersect(self, time_range): |
926 | + """ |
927 | + Return a new :class:`TimeRange` that is the intersection of the |
928 | + two time range intervals. If the intersection is empty this |
929 | + method returns :const:`None`. |
930 | + """ |
931 | + # Behold the boolean madness! |
932 | + result = TimeRange(0,0) |
933 | + if self.begin < time_range.begin: |
934 | + if self.end < time_range.begin: |
935 | + return None |
936 | + else: |
937 | + result.begin = time_range.begin |
938 | + else: |
939 | + if self.begin > time_range.end: |
940 | + return None |
941 | + else: |
942 | + result.begin = self.begin |
943 | + |
944 | + if self.end < time_range.end: |
945 | + if self.end < time_range.begin: |
946 | + return None |
947 | + else: |
948 | + result.end = self.end |
949 | + else: |
950 | + if self.begin > time_range.end: |
951 | + return None |
952 | + else: |
953 | + result.end = time_range.end |
954 | + |
955 | + return result |
956 | class StorageState: |
957 | """ |
958 | Enumeration class defining the possible values for the storage state |
959 | @@ -728,7 +784,8 @@ |
960 | doc="Read/write property with a list of :class:`Subjects <Subject>`") |
961 | |
962 | def get_id(self): |
963 | - return self[0][Event.Id] |
964 | + val = self[0][Event.Id] |
965 | + return int(val) if val else 0 |
966 | id = property(get_id, |
967 | doc="Read only property containing the the event id if the event has one") |
968 | |
969 | @@ -789,6 +846,7 @@ |
970 | data = self[0] |
971 | tdata = event_template[0] |
972 | for m in Event.Fields: |
973 | + if m == Event.Timestamp : continue |
974 | if tdata[m] and tdata[m] != data[m] : return False |
975 | |
976 | # If template has no subjects we have a match |
977 | @@ -803,3 +861,41 @@ |
978 | |
979 | # Template has subjects, but we never found a match |
980 | return False |
981 | + |
982 | + def matches_event (self, event): |
983 | + """ |
984 | + Interpret *self* as the template an match *event* against it. |
985 | + This method is the dual method of :meth:`matches_template`. |
986 | + """ |
987 | + #print "T: %s" % self |
988 | + #print "E: %s" % event |
989 | + #print "------------" |
990 | + return event.matches_template(self) |
991 | + |
992 | + def in_time_range (self, time_range): |
993 | + """ |
994 | + Check if the event timestamp lies within a :class:`TimeRange` |
995 | + """ |
996 | + t = int(self.timestamp) # The timestamp may be stored as a string |
997 | + return (t >= time_range.begin) and (t <= time_range.end) |
998 | + |
999 | + def _special_str(self, obj): |
1000 | + """ Return a string representation of obj |
1001 | + If obj is None, return an empty string. |
1002 | + """ |
1003 | + return unicode(obj) if obj is not None else "" |
1004 | + |
1005 | + def _make_dbus_sendable(self): |
1006 | + """ |
1007 | + Ensure that all fields in the event struct are non-None |
1008 | + """ |
1009 | + for n, value in enumerate(self[0]): |
1010 | + self[0][n] = self._special_str(value) |
1011 | + for subject in self[1]: |
1012 | + for n, value in enumerate(subject): |
1013 | + subject[n] = self._special_str(value) |
1014 | + # The payload require special handling, since it is binary data |
1015 | + # If there is indeed data here, we must not unicode encode it! |
1016 | + if self[2] is None: self[2] = u"" |
1017 | + |
1018 | + |
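The nested conditionals in the new `TimeRange.intersect()` above boil down to taking the larger of the two begin points and the smaller of the two end points, with an empty result whenever those cross. A minimal standalone sketch of that equivalence (plain tuples and the `intersect_ranges` helper name are illustrative only, not part of the zeitgeist API):

```python
# Standalone sketch of the intersection logic added to TimeRange above.
# Ranges are plain (begin, end) tuples here; the helper name is hypothetical.

def intersect_ranges(a, b):
    """Return the overlap of two closed ranges, or None if they are disjoint."""
    begin = max(a[0], b[0])
    end = min(a[1], b[1])
    # An empty intersection shows up as begin > end
    return (begin, end) if begin <= end else None

print(intersect_ranges((0, 100), (50, 200)))   # overlapping -> (50, 100)
print(intersect_ranges((0, 40), (50, 200)))    # disjoint -> None
```

Walking the branch-heavy version in the diff through the same inputs gives identical results, which suggests the "boolean madness" could later be simplified without changing behaviour.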
The notification/monitoring branch looks very promising now, and I'd like a more formal review.
What I do is add two new methods to the org.gnome.zeitgeist.Log interface:
InstallMonitor(in o client_monitor_path, in E event_template)
RemoveMonitor(in o client_monitor_path)
The client_monitor_path points to a client-side object that exposes the DBus interface org.gnome.zeitgeist.Monitor, which only has a single method:
Notify(in aE events)
I add some convenience methods in ZeitgeistClient to set up a monitor simply by calling: client.install_monitor(event_templates, notify_callback). Easy peasy lemon squeezy for application developers, I hope.
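To make the flow concrete, here is a simplified, self-contained model of the monitor mechanism described above: a monitor is installed as a (template, callback) pair, and after each insert the engine forwards only the events that match that monitor's template. All names here are hypothetical and events are plain dicts; the real implementation works over DBus via _zeitgeist/engine/notify.py and zeitgeist/client.py.

```python
# Hypothetical, simplified model of the monitor dispatch described above.
# Real zeitgeist monitors are DBus objects; here events are plain dicts
# and callbacks are invoked directly.

class MonitorTable:
    def __init__(self):
        self._monitors = {}  # monitor_path -> (template, callback)

    def install_monitor(self, path, template, callback):
        self._monitors[path] = (template, callback)

    def remove_monitor(self, path):
        self._monitors.pop(path, None)

    def dispatch(self, events):
        # Called after a successful insert: forward matching events only
        for template, callback in self._monitors.values():
            hits = [e for e in events
                    if all(e.get(k) == v for k, v in template.items())]
            if hits:
                callback(hits)

table = MonitorTable()
received = []
table.install_monitor("/org/example/monitor0",
                      {"interpretation": "VisitEvent"},
                      received.extend)
table.dispatch([
    {"interpretation": "VisitEvent", "actor": "app://firefox.desktop"},
    {"interpretation": "SendEvent", "actor": "app://evolution.desktop"},
])
print(len(received))  # 1: only the VisitEvent matched the template
```

The sketch also shows why cleanup matters: once remove_monitor() drops the entry (or the engine notices the client's DBus name vanished), that callback is never invoked again.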
TODO: There are two missing items, which don't block a merge though, IMHO:
- A unit test for RemoveMonitor, and one verifying that monitors for crashed clients are cleaned up
- What to do in case of duplicate events when we notify? Or what to do about insertion errors causing dropped events? Right now I hook in at the remote.py level, which makes it a nice clean cut, but I cannot detect these cases... I could hook in at the resonance_engine.py level and look for these details, but the logic would become a lot more complex...