Merge lp:~ubuntuone-control-tower/ubuntu/karmic/desktopcouch/snapshots-with-packaging into lp:ubuntu/karmic/desktopcouch
Status: | Merged |
---|---|
Merged at revision: | not available |
Proposed branch: | lp:~ubuntuone-control-tower/ubuntu/karmic/desktopcouch/snapshots-with-packaging |
Merge into: | lp:ubuntu/karmic/desktopcouch |
Diff against target: 3284 lines, 37 files modified

MANIFEST.in (+3/-1)
PKG-INFO (+0/-10)
config/desktop-couch/compulsory-auth.ini (+0/-3)
debian/changelog (+26/-0)
debian/desktopcouch-tools.install (+0/-1)
debian/desktopcouch.install (+4/-2)
debian/python-desktopcouch-records.install (+5/-5)
debian/python-desktopcouch.install (+1/-1)
debian/rules (+1/-0)
desktopcouch.egg-info/PKG-INFO (+0/-10)
desktopcouch.egg-info/SOURCES.txt (+0/-64)
desktopcouch.egg-info/dependency_links.txt (+0/-1)
desktopcouch.egg-info/top_level.txt (+0/-1)
desktopcouch/contacts/schema.txt (+50/-0)
desktopcouch/contacts/tests/test_create.py (+0/-62)
desktopcouch/local_files.py (+12/-0)
desktopcouch/notes/__init__.py (+0/-19)
desktopcouch/notes/record.py (+0/-31)
desktopcouch/pair/couchdb_pairing/couchdb_io.py (+36/-25)
desktopcouch/pair/couchdb_pairing/dbus_io.py (+42/-49)
desktopcouch/pair/tests/test_couchdb_io.py (+0/-133)
desktopcouch/records/couchgrid.py (+1/-18)
desktopcouch/records/doc/field_registry.txt (+213/-0)
desktopcouch/records/doc/records.txt (+13/-7)
desktopcouch/records/server.py (+2/-1)
desktopcouch/records/server_base.py (+0/-326)
desktopcouch/records/tests/test_couchgrid.py (+21/-0)
desktopcouch/records/tests/test_field_registry.py (+5/-1)
desktopcouch/records/tests/test_record.py (+5/-0)
desktopcouch/records/tests/test_server.py (+8/-0)
desktopcouch/replication.py (+0/-242)
desktopcouch/replication_services/__init__.py (+0/-4)
desktopcouch/replication_services/example.py (+0/-26)
desktopcouch/replication_services/ubuntuone.py (+0/-125)
po/desktopcouch.pot (+102/-0)
setup.cfg (+6/-6)
setup.py (+6/-4)
To merge this branch: | bzr merge lp:~ubuntuone-control-tower/ubuntu/karmic/desktopcouch/snapshots-with-packaging |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
James Westby (community) | Approve | ||
Review via email: mp+13209@code.launchpad.net |
Commit message
New upstream version 0.4.4.
Include compulsory-auth INI file to be secure by default.
Make debhelper warn about files not installed to any package (sort | uniq -c | grep -v 3 == errors).
Shorten/simplify debhelper install paths using dh_install exclusions.
Update MANIFEST and setup.py for new files.
Remove buggy couchgrid selected_records property.
Make couchgrid correctly retrieve record id.
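The `--list-missing` change above makes debhelper report files that no binary package installs; the parenthetical `sort | uniq -c | grep -v 3` note in the commit message is a counting trick over the combined file lists. A minimal Python sketch of that idea, assuming (hypothetically) that each file is expected to appear exactly 3 times and using made-up file names:

```python
from collections import Counter

# Hypothetical combined file list from several build passes. Under the
# assumption that every installed file should appear exactly 3 times
# (one reading of the commit message's "grep -v 3 == errors" note),
# any file with a different count is flagged as an error.
installed = ["a", "a", "a", "b", "b", "c", "c", "c"]
errors = [name for name, count in Counter(installed).items() if count != 3]
print(errors)  # -> ['b']
```

This is only an illustration of the pipeline's logic, not of debhelper itself; the real check operates on `dh_install` output.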
Description of the change
Chad Miller (cmiller) wrote:
> The upstream tarball seems to be incomplete as discussed on IRC.
This was already present. I don't understand why it was in the list.
desktopcouch-
Now included these.
desktopcouch-
desktopcouch/
desktopcouch/
James Westby (james-w) wrote:
Looks good now.
Thanks,
James
- 9. By James Westby
Merging shared upstream rev into target branch.
- 10. By James Westby
* New upstream release.
+ Include doc "txt" and translation files in sources.
+ couchgrid does not correctly retrieve record id (LP: #447512)
+ couchgrid selected_records property is buggy and should be removed for
karmic if possible (LP: #448357)
* Include compulsory-auth INI file to be secure by default.
(LP: #438800)
* Make debhelper warn about files not installed to some package.
* Shorten debhelper install paths using dh_install exclusions.
* New upstream release:
+ couchgrid did not correctly retrieve record id (LP: #447512)
+ HTTP 401 for valid auth information when talking to couchdb over SSL
(LP: #446516)
+ Support headless apps. (LP: #428681)
+ desktopcouch-service "ValueError: dictionary update sequence..." on
stdout (LP: #446511)

- 11. By James Westby

Upload to karmic.
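One change in the Preview Diff below (desktopcouch/local_files.py) monkeypatches `optionxform` so ConfigParser stops lower-casing option names, which matters because CouchDB .ini keys such as `BindAddress` are case-sensitive. A minimal sketch of the effect, using `RawConfigParser` (the patch itself uses `SafeConfigParser`, but the option-name behavior is the same):

```python
try:
    import configparser                      # Python 3
except ImportError:
    import ConfigParser as configparser      # Python 2, as in the original code

# Default behavior: optionxform lower-cases option names on set/read.
default = configparser.RawConfigParser()
default.add_section("httpd")
default.set("httpd", "BindAddress", "127.0.0.1")
print(default.options("httpd"))      # -> ['bindaddress']

# The desktopcouch patch replaces optionxform with the identity function,
# preserving the case of option names written back to the .ini file.
preserving = configparser.RawConfigParser()
preserving.optionxform = lambda s: s
preserving.add_section("httpd")
preserving.set("httpd", "BindAddress", "127.0.0.1")
print(preserving.options("httpd"))   # -> ['BindAddress']
```

Without the override, rewriting the config file would silently change key case and CouchDB would no longer see the intended settings.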
Preview Diff
1 | === modified file 'MANIFEST.in' | |||
2 | --- MANIFEST.in 2009-09-23 14:22:38 +0000 | |||
3 | +++ MANIFEST.in 2009-10-12 14:29:10 +0000 | |||
4 | @@ -1,10 +1,12 @@ | |||
5 | 1 | include COPYING COPYING.LESSER README | 1 | include COPYING COPYING.LESSER README |
6 | 2 | recursive-include data *.tmpl | 2 | recursive-include data *.tmpl |
7 | 3 | include desktopcouch-pair.desktop.in | 3 | include desktopcouch-pair.desktop.in |
8 | 4 | include setup.cfg | ||
9 | 4 | include po/POTFILES.in | 5 | include po/POTFILES.in |
10 | 5 | include start-desktop-couchdb.sh | 6 | include start-desktop-couchdb.sh |
11 | 6 | include stop-desktop-couchdb.sh | 7 | include stop-desktop-couchdb.sh |
13 | 7 | include desktopcouch/records/doc/records.txt | 8 | recursive-include desktopcouch *.txt |
14 | 9 | recursive-include po *.pot | ||
15 | 8 | include bin/* | 10 | include bin/* |
16 | 9 | include docs/man/* | 11 | include docs/man/* |
17 | 10 | include MANIFEST.in MANIFEST | 12 | include MANIFEST.in MANIFEST |
18 | 11 | 13 | ||
19 | === removed file 'PKG-INFO' | |||
20 | --- PKG-INFO 2009-09-28 12:06:08 +0000 | |||
21 | +++ PKG-INFO 1970-01-01 00:00:00 +0000 | |||
22 | @@ -1,10 +0,0 @@ | |||
23 | 1 | Metadata-Version: 1.0 | ||
24 | 2 | Name: desktopcouch | ||
25 | 3 | Version: 0.4.2 | ||
26 | 4 | Summary: A Desktop CouchDB instance. | ||
27 | 5 | Home-page: https://launchpad.net/desktopcouch | ||
28 | 6 | Author: Stuart Langridge | ||
29 | 7 | Author-email: stuart.langridge@canonical.com | ||
30 | 8 | License: LGPL-3 | ||
31 | 9 | Description: UNKNOWN | ||
32 | 10 | Platform: UNKNOWN | ||
33 | 11 | 0 | ||
34 | === added directory 'config' | |||
35 | === removed directory 'config' | |||
36 | === added directory 'config/desktop-couch' | |||
37 | === removed directory 'config/desktop-couch' | |||
38 | === added file 'config/desktop-couch/compulsory-auth.ini' | |||
39 | --- config/desktop-couch/compulsory-auth.ini 1970-01-01 00:00:00 +0000 | |||
40 | +++ config/desktop-couch/compulsory-auth.ini 2009-10-12 14:29:10 +0000 | |||
41 | @@ -0,0 +1,3 @@ | |||
42 | 1 | [couch_httpd_auth] | ||
43 | 2 | require_valid_user = true | ||
44 | 3 | |||
45 | 0 | 4 | ||
46 | === removed file 'config/desktop-couch/compulsory-auth.ini' | |||
47 | --- config/desktop-couch/compulsory-auth.ini 2009-09-23 14:22:38 +0000 | |||
48 | +++ config/desktop-couch/compulsory-auth.ini 1970-01-01 00:00:00 +0000 | |||
49 | @@ -1,3 +0,0 @@ | |||
50 | 1 | [couch_httpd_auth] | ||
51 | 2 | require_valid_user = true | ||
52 | 3 | |||
53 | 4 | 0 | ||
54 | === modified file 'debian/changelog' | |||
55 | --- debian/changelog 2009-09-28 12:06:08 +0000 | |||
56 | +++ debian/changelog 2009-10-12 14:29:10 +0000 | |||
57 | @@ -1,3 +1,29 @@ | |||
58 | 1 | desktopcouch (0.4.4-0ubuntu1) UNRELEASED; urgency=low | ||
59 | 2 | |||
60 | 3 | * New upstream release. | ||
61 | 4 | + Include doc "txt" and translation files in sources. | ||
62 | 5 | + couchgrid does not correctly retrieve record id (LP: #447512) | ||
63 | 6 | + couchgrid selected_records property is buggy and should be removed for | ||
64 | 7 | karmic if possible (LP: #448357) | ||
65 | 8 | |||
66 | 9 | -- Chad MILLER <chad.miller@canonical.com> Mon, 12 Oct 2009 10:17:50 -0400 | ||
67 | 10 | |||
68 | 11 | desktopcouch (0.4.3-0ubuntu1) karmic; urgency=low | ||
69 | 12 | |||
70 | 13 | * Include compulsory-auth INI file to be secure by default. | ||
71 | 14 | (LP: #438800) | ||
72 | 15 | * Make debhelper warn about files not installed to some package. | ||
73 | 16 | * Shorten debhelper install paths using dh_install exlusions. | ||
74 | 17 | * New upstream release: | ||
75 | 18 | + couchgrid did not correctly retrieve record id (LP: #447512) | ||
76 | 19 | + HTTP 401 for valid auth information when talking to couchdb over SSL | ||
77 | 20 | (LP: #446516) | ||
78 | 21 | + Support headless apps. (LP: #428681) | ||
79 | 22 | + desktopcouch-service "ValueError: dictionary update sequence..." on | ||
80 | 23 | stdout(LP: #446511) | ||
81 | 24 | |||
82 | 25 | -- Chad Miller <chad.miller@canonical.com> Mon, 12 Oct 2009 07:02:07 -0400 | ||
83 | 26 | |||
84 | 1 | desktopcouch (0.4.2-0ubuntu1) karmic; urgency=low | 27 | desktopcouch (0.4.2-0ubuntu1) karmic; urgency=low |
85 | 2 | 28 | ||
86 | 3 | * Include missing 0.4.0 changelog entry. | 29 | * Include missing 0.4.0 changelog entry. |
87 | 4 | 30 | ||
88 | === modified file 'debian/desktopcouch-tools.install' | |||
89 | --- debian/desktopcouch-tools.install 2009-07-31 13:44:45 +0000 | |||
90 | +++ debian/desktopcouch-tools.install 2009-10-12 14:29:10 +0000 | |||
91 | @@ -1,4 +1,3 @@ | |||
92 | 1 | debian/tmp/usr/share/applications/desktopcouch-pair.desktop | 1 | debian/tmp/usr/share/applications/desktopcouch-pair.desktop |
93 | 2 | debian/tmp/usr/bin/desktopcouch-pair | 2 | debian/tmp/usr/bin/desktopcouch-pair |
94 | 3 | debian/tmp/usr/share/man/man1/desktopcouch-pair.1 | 3 | debian/tmp/usr/share/man/man1/desktopcouch-pair.1 |
95 | 4 | #debian/tmp/usr/share/locale/*/LC_MESSAGES/desktopcouch.mo | ||
96 | 5 | 4 | ||
97 | === modified file 'debian/desktopcouch.install' | |||
98 | --- debian/desktopcouch.install 2009-07-31 13:44:45 +0000 | |||
99 | +++ debian/desktopcouch.install 2009-10-12 14:29:10 +0000 | |||
100 | @@ -1,3 +1,5 @@ | |||
103 | 1 | debian/tmp/usr/share/desktopcouch | 1 | debian/tmp/etc/xdg/desktop-couch/ |
104 | 2 | debian/tmp/usr/lib/desktopcouch/desktopcouch-{stop,service} | 2 | debian/tmp/usr/share/desktopcouch/ |
105 | 3 | debian/tmp/usr/lib/desktopcouch/desktopcouch-service | ||
106 | 4 | debian/tmp/usr/lib/desktopcouch/desktopcouch-stop | ||
107 | 3 | debian/tmp/usr/share/dbus-1/services/org.desktopcouch.CouchDB.service | 5 | debian/tmp/usr/share/dbus-1/services/org.desktopcouch.CouchDB.service |
108 | 4 | 6 | ||
109 | === modified file 'debian/python-desktopcouch-records.install' | |||
110 | --- debian/python-desktopcouch-records.install 2009-09-28 12:06:08 +0000 | |||
111 | +++ debian/python-desktopcouch-records.install 2009-10-12 14:29:10 +0000 | |||
112 | @@ -1,5 +1,5 @@ | |||
118 | 1 | debian/tmp/usr/share/doc/python-desktopcouch-records/api | 1 | debian/tmp/usr/share/doc/python-desktopcouch-records/api/ |
119 | 2 | debian/tmp/usr/lib/*/*/desktopcouch/records/* | 2 | debian/tmp/usr/lib/*/*/desktopcouch/records/ |
120 | 3 | debian/tmp/usr/lib/*/*/desktopcouch/contacts/* | 3 | debian/tmp/usr/lib/*/*/desktopcouch/contacts/ |
121 | 4 | debian/tmp/usr/lib/*/*/desktopcouch/notes/* | 4 | debian/tmp/usr/lib/*/*/desktopcouch/notes/ |
122 | 5 | debian/tmp/usr/lib/*/*/desktopcouch/replication_services/* | 5 | debian/tmp/usr/lib/*/*/desktopcouch/replication_services/ |
123 | 6 | 6 | ||
124 | === modified file 'debian/python-desktopcouch.install' | |||
125 | --- debian/python-desktopcouch.install 2009-07-31 13:44:45 +0000 | |||
126 | +++ debian/python-desktopcouch.install 2009-10-12 14:29:10 +0000 | |||
127 | @@ -1,2 +1,2 @@ | |||
128 | 1 | debian/tmp/usr/lib/*/*/desktopcouch/*.py | 1 | debian/tmp/usr/lib/*/*/desktopcouch/*.py |
130 | 2 | debian/tmp/usr/lib/*/*/desktopcouch/pair/{couchdb_pairing,__init__.py} | 2 | debian/tmp/usr/lib/*/*/desktopcouch/pair/ |
131 | 3 | 3 | ||
132 | === modified file 'debian/rules' | |||
133 | --- debian/rules 2009-07-31 13:44:45 +0000 | |||
134 | +++ debian/rules 2009-10-12 14:29:10 +0000 | |||
135 | @@ -1,6 +1,7 @@ | |||
136 | 1 | #!/usr/bin/make -f | 1 | #!/usr/bin/make -f |
137 | 2 | 2 | ||
138 | 3 | DEB_PYTHON_SYSTEM := pycentral | 3 | DEB_PYTHON_SYSTEM := pycentral |
139 | 4 | DEB_DH_INSTALL_ARGS := --list-missing --exclude=/tests/ --exclude=egg-info/ | ||
140 | 4 | 5 | ||
141 | 5 | include /usr/share/cdbs/1/rules/debhelper.mk | 6 | include /usr/share/cdbs/1/rules/debhelper.mk |
142 | 6 | include /usr/share/cdbs/1/class/python-distutils.mk | 7 | include /usr/share/cdbs/1/class/python-distutils.mk |
143 | 7 | 8 | ||
144 | === removed directory 'desktopcouch.egg-info' | |||
145 | === removed file 'desktopcouch.egg-info/PKG-INFO' | |||
146 | --- desktopcouch.egg-info/PKG-INFO 2009-09-28 12:06:08 +0000 | |||
147 | +++ desktopcouch.egg-info/PKG-INFO 1970-01-01 00:00:00 +0000 | |||
148 | @@ -1,10 +0,0 @@ | |||
149 | 1 | Metadata-Version: 1.0 | ||
150 | 2 | Name: desktopcouch | ||
151 | 3 | Version: 0.4.2 | ||
152 | 4 | Summary: A Desktop CouchDB instance. | ||
153 | 5 | Home-page: https://launchpad.net/desktopcouch | ||
154 | 6 | Author: Stuart Langridge | ||
155 | 7 | Author-email: stuart.langridge@canonical.com | ||
156 | 8 | License: LGPL-3 | ||
157 | 9 | Description: UNKNOWN | ||
158 | 10 | Platform: UNKNOWN | ||
159 | 11 | 0 | ||
160 | === removed file 'desktopcouch.egg-info/SOURCES.txt' | |||
161 | --- desktopcouch.egg-info/SOURCES.txt 2009-09-23 14:22:38 +0000 | |||
162 | +++ desktopcouch.egg-info/SOURCES.txt 1970-01-01 00:00:00 +0000 | |||
163 | @@ -1,64 +0,0 @@ | |||
164 | 1 | COPYING | ||
165 | 2 | COPYING.LESSER | ||
166 | 3 | MANIFEST.in | ||
167 | 4 | README | ||
168 | 5 | desktopcouch-pair.desktop.in | ||
169 | 6 | org.desktopcouch.CouchDB.service | ||
170 | 7 | setup.cfg | ||
171 | 8 | setup.py | ||
172 | 9 | start-desktop-couchdb.sh | ||
173 | 10 | stop-desktop-couchdb.sh | ||
174 | 11 | bin/desktopcouch-pair | ||
175 | 12 | bin/desktopcouch-service | ||
176 | 13 | bin/desktopcouch-stop | ||
177 | 14 | config/desktop-couch/compulsory-auth.ini | ||
178 | 15 | contrib/mocker.py | ||
179 | 16 | data/couchdb.tmpl | ||
180 | 17 | desktopcouch/__init__.py | ||
181 | 18 | desktopcouch/local_files.py | ||
182 | 19 | desktopcouch/replication.py | ||
183 | 20 | desktopcouch/start_local_couchdb.py | ||
184 | 21 | desktopcouch/stop_local_couchdb.py | ||
185 | 22 | desktopcouch.egg-info/PKG-INFO | ||
186 | 23 | desktopcouch.egg-info/SOURCES.txt | ||
187 | 24 | desktopcouch.egg-info/dependency_links.txt | ||
188 | 25 | desktopcouch.egg-info/top_level.txt | ||
189 | 26 | desktopcouch/contacts/__init__.py | ||
190 | 27 | desktopcouch/contacts/contactspicker.py | ||
191 | 28 | desktopcouch/contacts/record.py | ||
192 | 29 | desktopcouch/contacts/testing/__init__.py | ||
193 | 30 | desktopcouch/contacts/testing/create.py | ||
194 | 31 | desktopcouch/contacts/tests/__init__.py | ||
195 | 32 | desktopcouch/contacts/tests/test_contactspicker.py | ||
196 | 33 | desktopcouch/contacts/tests/test_create.py | ||
197 | 34 | desktopcouch/contacts/tests/test_record.py | ||
198 | 35 | desktopcouch/notes/__init__.py | ||
199 | 36 | desktopcouch/notes/record.py | ||
200 | 37 | desktopcouch/pair/__init__.py | ||
201 | 38 | desktopcouch/pair/couchdb_pairing/__init__.py | ||
202 | 39 | desktopcouch/pair/couchdb_pairing/couchdb_io.py | ||
203 | 40 | desktopcouch/pair/couchdb_pairing/dbus_io.py | ||
204 | 41 | desktopcouch/pair/couchdb_pairing/network_io.py | ||
205 | 42 | desktopcouch/pair/tests/__init__.py | ||
206 | 43 | desktopcouch/pair/tests/test_couchdb_io.py | ||
207 | 44 | desktopcouch/pair/tests/test_network_io.py | ||
208 | 45 | desktopcouch/records/__init__.py | ||
209 | 46 | desktopcouch/records/couchgrid.py | ||
210 | 47 | desktopcouch/records/field_registry.py | ||
211 | 48 | desktopcouch/records/record.py | ||
212 | 49 | desktopcouch/records/server.py | ||
213 | 50 | desktopcouch/records/server_base.py | ||
214 | 51 | desktopcouch/records/doc/records.txt | ||
215 | 52 | desktopcouch/records/tests/__init__.py | ||
216 | 53 | desktopcouch/records/tests/test_couchgrid.py | ||
217 | 54 | desktopcouch/records/tests/test_field_registry.py | ||
218 | 55 | desktopcouch/records/tests/test_record.py | ||
219 | 56 | desktopcouch/records/tests/test_server.py | ||
220 | 57 | desktopcouch/replication_services/__init__.py | ||
221 | 58 | desktopcouch/replication_services/example.py | ||
222 | 59 | desktopcouch/replication_services/ubuntuone.py | ||
223 | 60 | desktopcouch/tests/__init__.py | ||
224 | 61 | desktopcouch/tests/test_local_files.py | ||
225 | 62 | desktopcouch/tests/test_start_local_couchdb.py | ||
226 | 63 | docs/man/desktopcouch-pair.1 | ||
227 | 64 | po/POTFILES.in | ||
228 | 65 | \ No newline at end of file | 0 | \ No newline at end of file |
229 | 66 | 1 | ||
230 | === removed file 'desktopcouch.egg-info/dependency_links.txt' | |||
231 | --- desktopcouch.egg-info/dependency_links.txt 2009-09-23 14:22:38 +0000 | |||
232 | +++ desktopcouch.egg-info/dependency_links.txt 1970-01-01 00:00:00 +0000 | |||
233 | @@ -1,1 +0,0 @@ | |||
234 | 1 | |||
235 | 2 | 0 | ||
236 | === removed file 'desktopcouch.egg-info/top_level.txt' | |||
237 | --- desktopcouch.egg-info/top_level.txt 2009-09-23 14:22:38 +0000 | |||
238 | +++ desktopcouch.egg-info/top_level.txt 1970-01-01 00:00:00 +0000 | |||
239 | @@ -1,1 +0,0 @@ | |||
240 | 1 | desktopcouch | ||
241 | 2 | 0 | ||
242 | === added file 'desktopcouch/contacts/schema.txt' | |||
243 | --- desktopcouch/contacts/schema.txt 1970-01-01 00:00:00 +0000 | |||
244 | +++ desktopcouch/contacts/schema.txt 2009-10-12 14:29:10 +0000 | |||
245 | @@ -0,0 +1,50 @@ | |||
246 | 1 | # Copyright 2009 Canonical Ltd. | ||
247 | 2 | # | ||
248 | 3 | # This file is part of desktopcouch-contacts. | ||
249 | 4 | # | ||
250 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
251 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
252 | 7 | # as published by the Free Software Foundation. | ||
253 | 8 | # | ||
254 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
255 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
256 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
257 | 12 | # GNU Lesser General Public License for more details. | ||
258 | 13 | # | ||
259 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
260 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
261 | 16 | |||
262 | 17 | Schema | ||
263 | 18 | |||
264 | 19 | The proposed CouchDB contact schema is as follows: | ||
265 | 20 | |||
266 | 21 | Core fields | ||
267 | 22 | |||
268 | 23 | * record_type 'http://www.freedesktop.org/wiki/Specifications/desktopcouch/contact' | ||
269 | 24 | * first_name (string) | ||
270 | 25 | * last_name (string) | ||
271 | 26 | * birth_date (string, "YYYY-MM-DD") | ||
272 | 27 | * addresses (MergeableList of "address" dictionaries) | ||
273 | 28 | o city (string) | ||
274 | 29 | o address1 (string) | ||
275 | 30 | o address2 (string) | ||
276 | 31 | o pobox (string) | ||
277 | 32 | o state (string) | ||
278 | 33 | o country (string) | ||
279 | 34 | o postalcode (string) | ||
280 | 35 | o description (string, e.g., "Home") | ||
281 | 36 | * email_addresses (MergeableList of "emailaddress" dictionaries) | ||
282 | 37 | o address (string), | ||
283 | 38 | o description (string) | ||
284 | 39 | * phone_numbers (MergeableList of "phone number" dictionaries) | ||
285 | 40 | o number (string) | ||
286 | 41 | o description (string) | ||
287 | 42 | * application_annotations Everything else, organized per application. | ||
288 | 43 | |||
289 | 44 | Note: None of the core fields are mandatory, but applications should | ||
290 | 45 | not add any other fields at the top level of the record. Any fields | ||
291 | 46 | needed not defined here should be put under application_annotations in | ||
292 | 47 | the namespace of the application there. So for Ubuntu One: | ||
293 | 48 | |||
294 | 49 | "application_annotations": { | ||
295 | 50 | "Ubuntu One": {<Ubuntu One specific fields here>}} | ||
296 | 0 | 51 | ||
297 | === added file 'desktopcouch/contacts/tests/test_create.py' | |||
298 | --- desktopcouch/contacts/tests/test_create.py 1970-01-01 00:00:00 +0000 | |||
299 | +++ desktopcouch/contacts/tests/test_create.py 2009-10-12 14:29:10 +0000 | |||
300 | @@ -0,0 +1,62 @@ | |||
301 | 1 | # Copyright 2009 Canonical Ltd. | ||
302 | 2 | # | ||
303 | 3 | # This file is part of desktopcouch-contacts. | ||
304 | 4 | # | ||
305 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
306 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
307 | 7 | # as published by the Free Software Foundation. | ||
308 | 8 | # | ||
309 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
310 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
311 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
312 | 12 | # GNU Lesser General Public License for more details. | ||
313 | 13 | # | ||
314 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
315 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
316 | 16 | # | ||
317 | 17 | # Authors: Nicola Larosa <nicola.larosa@canonical.com> | ||
318 | 18 | |||
319 | 19 | """ | ||
320 | 20 | Tests for the random contacts creation testing support code. | ||
321 | 21 | |||
322 | 22 | These tests depend on the specific random generation algorithm used in the | ||
323 | 23 | "random" stdlib module. | ||
324 | 24 | """ | ||
325 | 25 | |||
326 | 26 | import random | ||
327 | 27 | |||
328 | 28 | import testtools | ||
329 | 29 | |||
330 | 30 | from desktopcouch.contacts.testing import create as create | ||
331 | 31 | |||
332 | 32 | class TestCreate(testtools.TestCase): | ||
333 | 33 | """Test the random creation testing support code.""" | ||
334 | 34 | |||
335 | 35 | def test_head_or_tails(self): | ||
336 | 36 | """ | ||
337 | 37 | Test the head_or_tails function. | ||
338 | 38 | Once the rndgen algo is seeded, the first four calls to | ||
339 | 39 | create.head_or_tails will yield True, True, False, False. | ||
340 | 40 | """ | ||
341 | 41 | random.seed(0) | ||
342 | 42 | self.assert_(create.head_or_tails()) | ||
343 | 43 | self.assert_(create.head_or_tails()) | ||
344 | 44 | self.assertFalse(create.head_or_tails()) | ||
345 | 45 | self.assertFalse(create.head_or_tails()) | ||
346 | 46 | |||
347 | 47 | def test_random_bools(self): | ||
348 | 48 | """ | ||
349 | 49 | Test the random_bools function. See the doc for the head_or_tails test. | ||
350 | 50 | """ | ||
351 | 51 | self.assertRaises(RuntimeError, create.random_bools, 1) | ||
352 | 52 | random.seed(0) | ||
353 | 53 | self.assertEqual(len(create.random_bools(2)), 2) # [True, True] | ||
354 | 54 | self.assert_(any(create.random_bools(2))) # orig.: [False, False] | ||
355 | 55 | random.seed(0) | ||
356 | 56 | create.random_bools(2) # [True, True] | ||
357 | 57 | self.assertFalse(any( | ||
358 | 58 | create.random_bools(2, at_least_one_true=False))) # [False, False] | ||
359 | 59 | |||
360 | 60 | def test_create_many_contacts(self): | ||
361 | 61 | """Run the create_many_contacts function.""" | ||
362 | 62 | create.create_many_contacts() | ||
363 | 0 | 63 | ||
364 | === removed file 'desktopcouch/contacts/tests/test_create.py' | |||
365 | --- desktopcouch/contacts/tests/test_create.py 2009-09-23 14:22:38 +0000 | |||
366 | +++ desktopcouch/contacts/tests/test_create.py 1970-01-01 00:00:00 +0000 | |||
367 | @@ -1,62 +0,0 @@ | |||
368 | 1 | # Copyright 2009 Canonical Ltd. | ||
369 | 2 | # | ||
370 | 3 | # This file is part of desktopcouch-contacts. | ||
371 | 4 | # | ||
372 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
373 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
374 | 7 | # as published by the Free Software Foundation. | ||
375 | 8 | # | ||
376 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
377 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
378 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
379 | 12 | # GNU Lesser General Public License for more details. | ||
380 | 13 | # | ||
381 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
382 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
383 | 16 | # | ||
384 | 17 | # Authors: Nicola Larosa <nicola.larosa@canonical.com> | ||
385 | 18 | |||
386 | 19 | """ | ||
387 | 20 | Tests for the random contacts creation testing support code. | ||
388 | 21 | |||
389 | 22 | These tests depend on the specific random generation algorithm used in the | ||
390 | 23 | "random" stdlib module. | ||
391 | 24 | """ | ||
392 | 25 | |||
393 | 26 | import random | ||
394 | 27 | |||
395 | 28 | import testtools | ||
396 | 29 | |||
397 | 30 | from desktopcouch.contacts.testing import create as create | ||
398 | 31 | |||
399 | 32 | class TestCreate(testtools.TestCase): | ||
400 | 33 | """Test the random creation testing support code.""" | ||
401 | 34 | |||
402 | 35 | def test_head_or_tails(self): | ||
403 | 36 | """ | ||
404 | 37 | Test the head_or_tails function. | ||
405 | 38 | Once the rndgen algo is seeded, the first four calls to | ||
406 | 39 | create.head_or_tails will yield True, True, False, False. | ||
407 | 40 | """ | ||
408 | 41 | random.seed(0) | ||
409 | 42 | self.assert_(create.head_or_tails()) | ||
410 | 43 | self.assert_(create.head_or_tails()) | ||
411 | 44 | self.assertFalse(create.head_or_tails()) | ||
412 | 45 | self.assertFalse(create.head_or_tails()) | ||
413 | 46 | |||
414 | 47 | def test_random_bools(self): | ||
415 | 48 | """ | ||
416 | 49 | Test the random_bools function. See the doc for the head_or_tails test. | ||
417 | 50 | """ | ||
418 | 51 | self.assertRaises(RuntimeError, create.random_bools, 1) | ||
419 | 52 | random.seed(0) | ||
420 | 53 | self.assertEqual(len(create.random_bools(2)), 2) # [True, True] | ||
421 | 54 | self.assert_(any(create.random_bools(2))) # orig.: [False, False] | ||
422 | 55 | random.seed(0) | ||
423 | 56 | create.random_bools(2) # [True, True] | ||
424 | 57 | self.assertFalse(any( | ||
425 | 58 | create.random_bools(2, at_least_one_true=False))) # [False, False] | ||
426 | 59 | |||
427 | 60 | def test_create_many_contacts(self): | ||
428 | 61 | """Run the create_many_contacts function.""" | ||
429 | 62 | create.create_many_contacts() | ||
430 | 63 | 0 | ||
431 | === modified file 'desktopcouch/local_files.py' | |||
432 | --- desktopcouch/local_files.py 2009-09-23 14:22:38 +0000 | |||
433 | +++ desktopcouch/local_files.py 2009-10-12 14:29:10 +0000 | |||
434 | @@ -136,6 +136,8 @@ | |||
435 | 136 | 136 | ||
436 | 137 | def set_bind_address(address, config_file_name=FILE_INI): | 137 | def set_bind_address(address, config_file_name=FILE_INI): |
437 | 138 | c = configparser.SafeConfigParser() | 138 | c = configparser.SafeConfigParser() |
438 | 139 | # monkeypatch ConfigParser to stop it lower-casing option names | ||
439 | 140 | c.optionxform = lambda s: s | ||
440 | 139 | c.read(config_file_name) | 141 | c.read(config_file_name) |
441 | 140 | if not c.has_section("httpd"): | 142 | if not c.has_section("httpd"): |
442 | 141 | c.add_section("httpd") | 143 | c.add_section("httpd") |
443 | @@ -147,3 +149,13 @@ | |||
444 | 147 | # You will need to add -b or -k on the end of this | 149 | # You will need to add -b or -k on the end of this |
445 | 148 | COUCH_EXEC_COMMAND = [COUCH_EXE, couch_chain_ini_files(), '-p', FILE_PID, | 150 | COUCH_EXEC_COMMAND = [COUCH_EXE, couch_chain_ini_files(), '-p', FILE_PID, |
446 | 149 | '-o', FILE_STDOUT, '-e', FILE_STDERR] | 151 | '-o', FILE_STDOUT, '-e', FILE_STDERR] |
447 | 152 | |||
448 | 153 | |||
449 | 154 | # Set appropriate permissions on relevant files and folders | ||
450 | 155 | for fn in [FILE_PID, FILE_STDOUT, FILE_STDERR, FILE_INI, FILE_LOG]: | ||
451 | 156 | if os.path.exists(fn): | ||
452 | 157 | os.chmod(fn, 0600) | ||
453 | 158 | for dn in [rootdir, config_dir, DIR_DB]: | ||
454 | 159 | if os.path.isdir(dn): | ||
455 | 160 | os.chmod(dn, 0700) | ||
456 | 161 | |||
457 | 150 | 162 | ||
458 | === added directory 'desktopcouch/notes' | |||
459 | === removed directory 'desktopcouch/notes' | |||
460 | === added file 'desktopcouch/notes/__init__.py' | |||
461 | --- desktopcouch/notes/__init__.py 1970-01-01 00:00:00 +0000 | |||
462 | +++ desktopcouch/notes/__init__.py 2009-10-12 14:29:10 +0000 | |||
463 | @@ -0,0 +1,19 @@ | |||
464 | 1 | # Copyright 2009 Canonical Ltd. | ||
465 | 2 | # | ||
466 | 3 | # This file is part of desktopcouch-notes. | ||
467 | 4 | # | ||
468 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
469 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
470 | 7 | # as published by the Free Software Foundation. | ||
471 | 8 | # | ||
472 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
473 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
474 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
475 | 12 | # GNU Lesser General Public License for more details. | ||
476 | 13 | # | ||
477 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
478 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
479 | 16 | # | ||
480 | 17 | # Authors: Rodrigo Moya <rodrigo.moya@canonical.com> | ||
481 | 18 | |||
482 | 19 | """UbuntuOne Notes API""" | ||
483 | 0 | 20 | ||
484 | === removed file 'desktopcouch/notes/__init__.py' | |||
485 | --- desktopcouch/notes/__init__.py 2009-09-23 14:22:38 +0000 | |||
486 | +++ desktopcouch/notes/__init__.py 1970-01-01 00:00:00 +0000 | |||
487 | @@ -1,19 +0,0 @@ | |||
488 | 1 | # Copyright 2009 Canonical Ltd. | ||
489 | 2 | # | ||
490 | 3 | # This file is part of desktopcouch-notes. | ||
491 | 4 | # | ||
492 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
493 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
494 | 7 | # as published by the Free Software Foundation. | ||
495 | 8 | # | ||
496 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
497 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
498 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
499 | 12 | # GNU Lesser General Public License for more details. | ||
500 | 13 | # | ||
501 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
502 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
503 | 16 | # | ||
504 | 17 | # Authors: Rodrigo Moya <rodrigo.moya@canonical.com> | ||
505 | 18 | |||
506 | 19 | """UbuntuOne Notes API""" | ||
507 | 20 | 0 | ||
508 | === added file 'desktopcouch/notes/record.py' | |||
509 | --- desktopcouch/notes/record.py 1970-01-01 00:00:00 +0000 | |||
510 | +++ desktopcouch/notes/record.py 2009-10-12 14:29:10 +0000 | |||
511 | @@ -0,0 +1,31 @@ | |||
512 | 1 | # Copyright 2009 Canonical Ltd. | ||
513 | 2 | # | ||
514 | 3 | # This file is part of desktopcouch-notes. | ||
515 | 4 | # | ||
516 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
517 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
518 | 7 | # as published by the Free Software Foundation. | ||
519 | 8 | # | ||
520 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
521 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
522 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
523 | 12 | # GNU Lesser General Public License for more details. | ||
524 | 13 | # | ||
525 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
526 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
527 | 16 | # | ||
528 | 17 | # Authors: Rodrigo Moya <rodrigo.moya@canonical.com> | ||
529 | 18 | |||
530 | 19 | |||
531 | 20 | """A dictionary based note record representation.""" | ||
532 | 21 | |||
533 | 22 | from desktopcouch.records.record import Record | ||
534 | 23 | |||
535 | 24 | NOTE_RECORD_TYPE = 'http://www.freedesktop.org/wiki/Specifications/desktopcouch/note' | ||
536 | 25 | |||
537 | 26 | class Note(Record): | ||
538 | 27 | """An Ubuntuone Note Record.""" | ||
539 | 28 | |||
540 | 29 | def __init__(self, data=None, record_id=None): | ||
541 | 30 | super(Note, self).__init__( | ||
542 | 31 | record_id=record_id, data=data, record_type=NOTE_RECORD_TYPE) | ||
543 | 0 | 32 | ||
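The `Note` class added above is a thin `Record` subclass that pins `record_type` to a fixed URL so callers never pass it themselves. A minimal sketch of that pattern follows; the `Record` class here is a stand-in for `desktopcouch.records.record.Record` (just enough of its assumed dict-backed shape to show the idea), not the real implementation.

```python
NOTE_RECORD_TYPE = ('http://www.freedesktop.org/wiki/Specifications/'
                    'desktopcouch/note')


class Record(object):
    """Stand-in for desktopcouch's dict-backed CouchDB record class."""
    def __init__(self, data=None, record_id=None, record_type=None):
        self._data = dict(data or {})
        self._data['record_type'] = record_type
        self.record_id = record_id

    @property
    def record_type(self):
        return self._data['record_type']


class Note(Record):
    """Mirrors the diff: the subclass supplies record_type itself."""
    def __init__(self, data=None, record_id=None):
        super(Note, self).__init__(
            record_id=record_id, data=data, record_type=NOTE_RECORD_TYPE)


note = Note({'title': 'shopping list'})
print(note.record_type)  # the fixed note record_type URL
```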
544 | === removed file 'desktopcouch/notes/record.py' | |||
545 | --- desktopcouch/notes/record.py 2009-09-23 14:22:38 +0000 | |||
546 | +++ desktopcouch/notes/record.py 1970-01-01 00:00:00 +0000 | |||
547 | @@ -1,31 +0,0 @@ | |||
548 | 1 | # Copyright 2009 Canonical Ltd. | ||
549 | 2 | # | ||
550 | 3 | # This file is part of desktopcouch-notes. | ||
551 | 4 | # | ||
552 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
553 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
554 | 7 | # as published by the Free Software Foundation. | ||
555 | 8 | # | ||
556 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
557 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
558 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
559 | 12 | # GNU Lesser General Public License for more details. | ||
560 | 13 | # | ||
561 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
562 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
563 | 16 | # | ||
564 | 17 | # Authors: Rodrigo Moya <rodrigo.moya@canonical.com> | ||
565 | 18 | |||
566 | 19 | |||
567 | 20 | """A dictionary based note record representation.""" | ||
568 | 21 | |||
569 | 22 | from desktopcouch.records.record import Record | ||
570 | 23 | |||
571 | 24 | NOTE_RECORD_TYPE = 'http://www.freedesktop.org/wiki/Specifications/desktopcouch/note' | ||
572 | 25 | |||
573 | 26 | class Note(Record): | ||
574 | 27 | """An Ubuntuone Note Record.""" | ||
575 | 28 | |||
576 | 29 | def __init__(self, data=None, record_id=None): | ||
577 | 30 | super(Note, self).__init__( | ||
578 | 31 | record_id=record_id, data=data, record_type=NOTE_RECORD_TYPE) | ||
579 | 32 | 0 | ||
580 | === modified file 'desktopcouch/pair/couchdb_pairing/couchdb_io.py' | |||
581 | --- desktopcouch/pair/couchdb_pairing/couchdb_io.py 2009-09-28 12:06:08 +0000 | |||
582 | +++ desktopcouch/pair/couchdb_pairing/couchdb_io.py 2009-10-12 14:29:10 +0000 | |||
583 | @@ -18,6 +18,7 @@ | |||
584 | 18 | """Communicate with CouchDB.""" | 18 | """Communicate with CouchDB.""" |
585 | 19 | 19 | ||
586 | 20 | import logging | 20 | import logging |
587 | 21 | import urllib | ||
588 | 21 | 22 | ||
589 | 22 | from desktopcouch import find_port as desktopcouch_find_port | 23 | from desktopcouch import find_port as desktopcouch_find_port |
590 | 23 | from desktopcouch.records import server | 24 | from desktopcouch.records import server |
591 | @@ -25,6 +26,7 @@ | |||
592 | 25 | import socket | 26 | import socket |
593 | 26 | import uuid | 27 | import uuid |
594 | 27 | import datetime | 28 | import datetime |
595 | 29 | import urllib | ||
596 | 28 | 30 | ||
597 | 29 | RECTYPE_BASE = "http://www.freedesktop.org/wiki/Specifications/desktopcouch/" | 31 | RECTYPE_BASE = "http://www.freedesktop.org/wiki/Specifications/desktopcouch/" |
598 | 30 | PAIRED_SERVER_RECORD_TYPE = RECTYPE_BASE + "paired_server" | 32 | PAIRED_SERVER_RECORD_TYPE = RECTYPE_BASE + "paired_server" |
599 | @@ -33,10 +35,15 @@ | |||
600 | 33 | def mkuri(hostname, port, has_ssl=False, path="", auth_pair=None): | 35 | def mkuri(hostname, port, has_ssl=False, path="", auth_pair=None): |
601 | 34 | """Create a URI from parts.""" | 36 | """Create a URI from parts.""" |
602 | 35 | protocol = "https" if has_ssl else "http" | 37 | protocol = "https" if has_ssl else "http" |
607 | 36 | auth = (":".join(map(urllib.quote, auth_pair) + "@")) if auth_pair else "" | 38 | if auth_pair: |
608 | 37 | port = int(port) | 39 | auth = (":".join(map(urllib.quote, auth_pair)) + "@") |
609 | 38 | uri = "%(protocol)s://%(auth)s%(hostname)s:%(port)d/%(path)s" % locals() | 40 | else: |
610 | 39 | return uri | 41 | auth = "" |
611 | 42 | if (protocol, port) in (("http", 80), ("https", 443)): | ||
612 | 43 | return "%s://%s%s/%s" % (protocol, auth, hostname, path) | ||
613 | 44 | else: | ||
614 | 45 | port = str(port) | ||
615 | 46 | return "%s://%s%s:%s/%s" % (protocol, auth, hostname, port, path) | ||
616 | 40 | 47 | ||
617 | 41 | def _get_db(name, create=True, uri=None): | 48 | def _get_db(name, create=True, uri=None): |
618 | 42 | """Get (and create?) a database.""" | 49 | """Get (and create?) a database.""" |
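The `mkuri` hunk above fixes a real bug: the old one-liner `":".join(map(urllib.quote, auth_pair) + "@")` tried to add a string to a list (a `TypeError`), and the new version also drops the port when it is the scheme default. A Python 3 sketch of the fixed behaviour (the original is Python 2 and uses `urllib.quote`; `urllib.parse.quote` with `safe=""` is used here as the closest equivalent):

```python
from urllib.parse import quote


def mkuri(hostname, port, has_ssl=False, path="", auth_pair=None):
    """Build a CouchDB URI: percent-encode credentials, omit default ports."""
    protocol = "https" if has_ssl else "http"
    if auth_pair:
        # Encode user and password separately so ':' and '@' stay unambiguous.
        auth = ":".join(quote(p, safe="") for p in auth_pair) + "@"
    else:
        auth = ""
    if (protocol, int(port)) in (("http", 80), ("https", 443)):
        # Scheme-default port: leave it out of the URI entirely.
        return "%s://%s%s/%s" % (protocol, auth, hostname, path)
    return "%s://%s%s:%d/%s" % (protocol, auth, hostname, int(port), path)


print(mkuri('fnord.org', 55241, has_ssl=True, path='a/b/c',
            auth_pair=('f o o', 'b=a=r')))
# → https://f%20o%20o:b%3Da%3Dr@fnord.org:55241/a/b/c
```

This is the same expectation the new `test_mkuri` test in this branch asserts.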
619 | @@ -115,6 +122,7 @@ | |||
620 | 115 | 122 | ||
621 | 116 | excluded = set() | 123 | excluded = set() |
622 | 117 | excluded.add("management") | 124 | excluded.add("management") |
623 | 125 | excluded.add("users") | ||
624 | 118 | excluded_msets = _get_management_data(PAIRED_SERVER_RECORD_TYPE, | 126 | excluded_msets = _get_management_data(PAIRED_SERVER_RECORD_TYPE, |
625 | 119 | "excluded_names", uri=uri) | 127 | "excluded_names", uri=uri) |
626 | 120 | for excluded_mset in excluded_msets: | 128 | for excluded_mset in excluded_msets: |
627 | @@ -158,6 +166,8 @@ | |||
628 | 158 | v = dict() | 166 | v = dict() |
629 | 159 | v["record_id"] = row.id | 167 | v["record_id"] = row.id |
630 | 160 | v["active"] = True | 168 | v["active"] = True |
631 | 169 | if "oauth" in row.value: | ||
632 | 170 | v["oauth"] = row.value["oauth"] | ||
633 | 161 | if "unpaired" in row.value: | 171 | if "unpaired" in row.value: |
634 | 162 | v["active"] = not row.value["unpaired"] | 172 | v["active"] = not row.value["unpaired"] |
635 | 163 | hostid = row.value["pairing_identifier"] | 173 | hostid = row.value["pairing_identifier"] |
636 | @@ -193,15 +203,27 @@ | |||
637 | 193 | target_oauth=None): | 203 | target_oauth=None): |
638 | 194 | """This replication is instant and blocking, and does not persist. """ | 204 | """This replication is instant and blocking, and does not persist. """ |
639 | 195 | 205 | ||
640 | 206 | try: | ||
641 | 207 | if target_host: | ||
642 | 208 | # Target databases must exist before replicating to them. | ||
643 | 209 | logging.debug("creating %r %s:%d %s", target_database, target_host, | ||
644 | 210 | target_port, target_oauth) | ||
645 | 211 | create_database(target_host, target_port, target_database, | ||
646 | 212 | target_ssl, target_oauth) | ||
647 | 213 | logging.debug("db exists, and we're ready to replicate") | ||
648 | 214 | except: | ||
649 | 215 | logging.exception("can't create/verify %r %s:%d oauth=%s", | ||
650 | 216 | target_database, target_host, target_port, target_oauth) | ||
651 | 217 | |||
652 | 196 | if source_host: | 218 | if source_host: |
654 | 197 | source = mkuri(source_host, source_port, source_ssl, source_database) | 219 | source = mkuri(source_host, source_port, source_ssl, urllib.quote(source_database, safe="")) |
655 | 198 | else: | 220 | else: |
657 | 199 | source = source_database | 221 | source = urllib.quote(source_database, safe="") |
658 | 200 | 222 | ||
659 | 201 | if target_host: | 223 | if target_host: |
661 | 202 | target = mkuri(target_host, target_port, target_ssl, target_database) | 224 | target = mkuri(target_host, target_port, target_ssl, urllib.quote(target_database, safe="")) |
662 | 203 | else: | 225 | else: |
664 | 204 | target = target_database | 226 | target = urllib.quote(target_database, safe="") |
665 | 205 | 227 | ||
666 | 206 | if source_oauth: | 228 | if source_oauth: |
667 | 207 | assert "consumer_secret" in source_oauth | 229 | assert "consumer_secret" in source_oauth |
668 | @@ -212,35 +234,24 @@ | |||
669 | 212 | target = dict(url=target, auth=dict(oauth=target_oauth)) | 234 | target = dict(url=target, auth=dict(oauth=target_oauth)) |
670 | 213 | 235 | ||
671 | 214 | record = dict(source=source, target=target) | 236 | record = dict(source=source, target=target) |
685 | 215 | try: | 237 | |
673 | 216 | |||
674 | 217 | if target_host: | ||
675 | 218 | # Target databases must exist before replicating to them. | ||
676 | 219 | logging.debug("creating %r %s:%d", target_database, target_host, | ||
677 | 220 | target_port) | ||
678 | 221 | create_database(target_host, target_port, target_database, | ||
679 | 222 | target_ssl, target_oauth) | ||
680 | 223 | except: | ||
681 | 224 | logging.exception("can't talk to couchdb. %r %s:%d oauth=%s", | ||
682 | 225 | target_database, target_host, target_port, target_oauth) | ||
683 | 226 | |||
684 | 227 | logging.debug("db exists, and we're ready to replicate") | ||
686 | 228 | try: | 238 | try: |
687 | 229 | # regardless of source and target, we talk to our local couchdb :( | 239 | # regardless of source and target, we talk to our local couchdb :( |
688 | 230 | port = int(desktopcouch_find_port()) | 240 | port = int(desktopcouch_find_port()) |
689 | 231 | url = mkuri("localhost", port,) | 241 | url = mkuri("localhost", port,) |
690 | 232 | 242 | ||
692 | 233 | logging.debug("asking %r to send %s to %s", url, source, target) | 243 | logging.debug("asking %r to replicate %s to %s, using record %s", url, source, target, record) |
693 | 234 | 244 | ||
694 | 235 | ### All until python-couchdb gets a Server.replicate() function | 245 | ### All until python-couchdb gets a Server.replicate() function |
695 | 236 | local_server = server.OAuthCapableServer(url) | 246 | local_server = server.OAuthCapableServer(url) |
697 | 237 | resp, data = local_server.resource.post(path='/_replicate', content=record) | 247 | resp, data = local_server.resource.post(path='/_replicate', |
698 | 248 | content=record) | ||
699 | 238 | 249 | ||
700 | 239 | logging.debug("replicate result: %r %r", resp, data) | 250 | logging.debug("replicate result: %r %r", resp, data) |
701 | 240 | ### | 251 | ### |
702 | 241 | except: | 252 | except: |
705 | 242 | logging.error("can't talk to couchdb. %r <== %r", url, record) | 253 | logging.exception("can't replicate %r %r <== %r", source_database, |
706 | 243 | raise | 254 | url, record) |
707 | 244 | 255 | ||
708 | 245 | def get_pairings(uri=None): | 256 | def get_pairings(uri=None): |
709 | 246 | """Get a list of paired servers.""" | 257 | """Get a list of paired servers.""" |
710 | 247 | 258 | ||
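The replication changes above do three things: create the target database before replicating, percent-encode local database names with `safe=""` (so names containing `/` survive), and wrap an oauth-protected endpoint in a `{"url": ..., "auth": {"oauth": ...}}` dict. A sketch of the document that ends up POSTed to CouchDB's `/_replicate`; `build_replication_request` is an illustrative helper invented here, not part of desktopcouch:

```python
import json
from urllib.parse import quote


def build_replication_request(source_database, target_database,
                              target_host=None, target_port=None,
                              target_ssl=False, target_oauth=None):
    """Assemble the JSON body POSTed to /_replicate, as the diff does."""
    # Local names are quoted with safe="" so e.g. 'user/contacts' is legal.
    source = quote(source_database, safe="")
    if target_host:
        scheme = "https" if target_ssl else "http"
        target = "%s://%s:%d/%s" % (scheme, target_host, target_port,
                                    quote(target_database, safe=""))
    else:
        target = quote(target_database, safe="")
    record = {"source": source, "target": target}
    if target_oauth:
        # OAuth credentials ride alongside the URL in an 'auth' dict.
        record["target"] = {"url": target, "auth": {"oauth": target_oauth}}
    return json.dumps(record)


print(build_replication_request("contacts", "contacts",
                                target_host="other.local", target_port=5984))
```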
711 | === modified file 'desktopcouch/pair/couchdb_pairing/dbus_io.py' | |||
712 | --- desktopcouch/pair/couchdb_pairing/dbus_io.py 2009-09-23 14:22:38 +0000 | |||
713 | +++ desktopcouch/pair/couchdb_pairing/dbus_io.py 2009-10-12 14:29:10 +0000 | |||
714 | @@ -103,18 +103,14 @@ | |||
715 | 103 | class LocationAdvertisement(Advertisement): | 103 | class LocationAdvertisement(Advertisement): |
716 | 104 | """An advertised couchdb location. See Advertisement class.""" | 104 | """An advertised couchdb location. See Advertisement class.""" |
717 | 105 | def __init__(self, *args, **kwargs): | 105 | def __init__(self, *args, **kwargs): |
722 | 106 | if "stype" in kwargs: | 106 | kwargs["stype"] = location_discovery_service_type |
723 | 107 | kwargs.pop(stype) | 107 | super(LocationAdvertisement, self).__init__(*args, **kwargs) |
720 | 108 | super(LocationAdvertisement, self).__init__( | ||
721 | 109 | stype=location_discovery_service_type, *args, **kwargs) | ||
724 | 110 | 108 | ||
725 | 111 | class PairAdvertisement(Advertisement): | 109 | class PairAdvertisement(Advertisement): |
726 | 112 | """An advertised couchdb pairing opportunity. See Advertisement class.""" | 110 | """An advertised couchdb pairing opportunity. See Advertisement class.""" |
727 | 113 | def __init__(self, *args, **kwargs): | 111 | def __init__(self, *args, **kwargs): |
732 | 114 | if "stype" in kwargs: | 112 | kwargs["stype"] = invitations_discovery_service_type |
733 | 115 | kwargs.pop(stype) | 113 | super(PairAdvertisement, self).__init__(*args, **kwargs) |
730 | 116 | super(PairAdvertisement, self).__init__( | ||
731 | 117 | stype=invitations_discovery_service_type, *args, **kwargs) | ||
734 | 118 | 114 | ||
735 | 119 | def avahitext_to_dict(avahitext): | 115 | def avahitext_to_dict(avahitext): |
736 | 120 | text = {} | 116 | text = {} |
737 | @@ -141,7 +137,13 @@ | |||
738 | 141 | def get_seen_paired_hosts(): | 137 | def get_seen_paired_hosts(): |
739 | 142 | pairing_encyclopedia = couchdb_io.get_all_known_pairings() | 138 | pairing_encyclopedia = couchdb_io.get_all_known_pairings() |
740 | 143 | return ( | 139 | return ( |
742 | 144 | (uuid, addr, port, pairing_encyclopedia[uuid]["active"]) | 140 | ( |
743 | 141 | uuid, | ||
744 | 142 | addr, | ||
745 | 143 | port, | ||
746 | 144 | not pairing_encyclopedia[uuid]["active"], | ||
747 | 145 | pairing_encyclopedia[uuid]["oauth"], | ||
748 | 146 | ) | ||
749 | 145 | for uuid, (addr, port) | 147 | for uuid, (addr, port) |
750 | 146 | in nearby_desktop_couch_instances.items() | 148 | in nearby_desktop_couch_instances.items() |
751 | 147 | if uuid in pairing_encyclopedia) | 149 | if uuid in pairing_encyclopedia) |
752 | @@ -149,51 +151,39 @@ | |||
753 | 149 | def maintain_discovered_servers(add_cb=cb_found_desktopcouch_server, | 151 | def maintain_discovered_servers(add_cb=cb_found_desktopcouch_server, |
754 | 150 | del_cb=cb_lost_desktopcouch_server): | 152 | del_cb=cb_lost_desktopcouch_server): |
755 | 151 | 153 | ||
757 | 152 | def remove_item_handler(interface, protocol, name, stype, domain, flags): | 154 | def remove_item_handler(cb, interface, protocol, name, stype, domain, |
758 | 155 | flags): | ||
759 | 153 | """A service disappeared.""" | 156 | """A service disappeared.""" |
760 | 154 | 157 | ||
782 | 155 | def handle_error(*args): | 158 | if name.startswith("desktopcouch "): |
783 | 156 | """An error in resolving a new service.""" | 159 | hostid = name[13:] |
784 | 157 | logging.error("zeroconf ItemNew error for services, %s", args) | 160 | logging.debug("lost sight of %r", hostid) |
785 | 158 | 161 | cb(hostid) | |
786 | 159 | def handle_resolved(*args): | 162 | else: |
787 | 160 | """Successfully resolved a new service, which we decode and send | 163 | logging.error("annc doesn't look like one of ours. %r", name) |
788 | 161 | back to our calling environment with the callback function.""" | 164 | |
789 | 162 | 165 | def new_item_handler(cb, interface, protocol, name, stype, domain, flags): | |
769 | 163 | name, host, port = args[2], args[5], args[8] | ||
770 | 164 | if name.startswith("desktopcouch "): | ||
771 | 165 | hostid = name[13:] | ||
772 | 166 | logging.debug("lost sight of %r", hostid) | ||
773 | 167 | del_cb(hostid) | ||
774 | 168 | else: | ||
775 | 169 | logging.error("no UUID in zeroconf message, %r", args) | ||
776 | 170 | |||
777 | 171 | server.ResolveService(interface, protocol, name, stype, | ||
778 | 172 | domain, avahi.PROTO_UNSPEC, dbus.UInt32(0), | ||
779 | 173 | reply_handler=handle_resolved, error_handler=handle_error) | ||
780 | 174 | |||
781 | 175 | def new_item_handler(interface, protocol, name, stype, domain, flags): | ||
790 | 176 | """A service appeared.""" | 166 | """A service appeared.""" |
791 | 177 | 167 | ||
792 | 178 | def handle_error(*args): | 168 | def handle_error(*args): |
793 | 179 | """An error in resolving a new service.""" | 169 | """An error in resolving a new service.""" |
794 | 180 | logging.error("zeroconf ItemNew error for services, %s", args) | 170 | logging.error("zeroconf ItemNew error for services, %s", args) |
795 | 181 | 171 | ||
797 | 182 | def handle_resolved(*args): | 172 | def handle_resolved(cb, *args): |
798 | 183 | """Successfully resolved a new service, which we decode and send | 173 | """Successfully resolved a new service, which we decode and send |
799 | 184 | back to our calling environment with the callback function.""" | 174 | back to our calling environment with the callback function.""" |
800 | 185 | 175 | ||
801 | 186 | name, host, port = args[2], args[5], args[8] | 176 | name, host, port = args[2], args[5], args[8] |
802 | 187 | # FIXME strip off "desktopcouch " | ||
803 | 188 | if name.startswith("desktopcouch "): | 177 | if name.startswith("desktopcouch "): |
805 | 189 | add_cb(name[13:], host, port) | 178 | cb(name[13:], host, port) |
806 | 190 | else: | 179 | else: |
808 | 191 | logging.error("no UUID in zeroconf message, %r", name) | 180 | logging.error("annc doesn't look like one of ours. %r", name) |
809 | 192 | return True | 181 | return True |
810 | 193 | 182 | ||
811 | 194 | server.ResolveService(interface, protocol, name, stype, | 183 | server.ResolveService(interface, protocol, name, stype, |
812 | 195 | domain, avahi.PROTO_UNSPEC, dbus.UInt32(0), | 184 | domain, avahi.PROTO_UNSPEC, dbus.UInt32(0), |
814 | 196 | reply_handler=handle_resolved, error_handler=handle_error) | 185 | reply_handler=lambda *a: handle_resolved(cb, *a), |
815 | 186 | error_handler=handle_error) | ||
816 | 197 | 187 | ||
817 | 198 | bus, server = get_dbus_bus_server() | 188 | bus, server = get_dbus_bus_server() |
818 | 199 | domain_name = get_local_hostname()[1] | 189 | domain_name = get_local_hostname()[1] |
819 | @@ -203,8 +193,10 @@ | |||
820 | 203 | 193 | ||
821 | 204 | sbrowser = dbus.Interface(browser_name, | 194 | sbrowser = dbus.Interface(browser_name, |
822 | 205 | avahi.DBUS_INTERFACE_SERVICE_BROWSER) | 195 | avahi.DBUS_INTERFACE_SERVICE_BROWSER) |
825 | 206 | sbrowser.connect_to_signal("ItemNew", new_item_handler) | 196 | sbrowser.connect_to_signal("ItemNew", |
826 | 207 | sbrowser.connect_to_signal("ItemRemove", remove_item_handler) | 197 | lambda *a: new_item_handler(add_cb, *a)) |
827 | 198 | sbrowser.connect_to_signal("ItemRemove", | ||
828 | 199 | lambda *a: remove_item_handler(del_cb, *a)) | ||
829 | 208 | sbrowser.connect_to_signal("Failure", | 200 | sbrowser.connect_to_signal("Failure", |
830 | 209 | lambda *a: logging.error("avahi error %r", a)) | 201 | lambda *a: logging.error("avahi error %r", a)) |
831 | 210 | 202 | ||
832 | @@ -214,27 +206,26 @@ | |||
833 | 214 | """Start looking for services. Use two callbacks to handle seeing | 206 | """Start looking for services. Use two callbacks to handle seeing |
834 | 215 | new services and seeing services disappear.""" | 207 | new services and seeing services disappear.""" |
835 | 216 | 208 | ||
837 | 217 | def remove_item_handler(interface, protocol, name, stype, domain, flags): | 209 | def remove_item_handler(cb, interface, protocol, name, stype, domain, flags): |
838 | 218 | """A service disappeared.""" | 210 | """A service disappeared.""" |
839 | 219 | 211 | ||
840 | 220 | if not show_local and flags & avahi.LOOKUP_RESULT_LOCAL: | 212 | if not show_local and flags & avahi.LOOKUP_RESULT_LOCAL: |
841 | 221 | return | 213 | return |
846 | 222 | 214 | cb(name) | |
847 | 223 | del_commport_name_cb(name) | 215 | |
848 | 224 | 216 | def new_item_handler(cb, interface, protocol, name, stype, domain, flags): | |
845 | 225 | def new_item_handler(interface, protocol, name, stype, domain, flags): | ||
849 | 226 | """A service appeared.""" | 217 | """A service appeared.""" |
850 | 227 | 218 | ||
851 | 228 | def handle_error(*args): | 219 | def handle_error(*args): |
852 | 229 | """An error in resolving a new service.""" | 220 | """An error in resolving a new service.""" |
853 | 230 | logging.error("zeroconf ItemNew error for services, %s", args) | 221 | logging.error("zeroconf ItemNew error for services, %s", args) |
854 | 231 | 222 | ||
856 | 232 | def handle_resolved(*args): | 223 | def handle_resolved(cb, *args): |
857 | 233 | """Successfully resolved a new service, which we decode and send | 224 | """Successfully resolved a new service, which we decode and send |
858 | 234 | back to our calling environment with the callback function.""" | 225 | back to our calling environment with the callback function.""" |
859 | 235 | text = avahitext_to_dict(args[9]) | 226 | text = avahitext_to_dict(args[9]) |
860 | 236 | name, host, port = args[2], args[5], args[8] | 227 | name, host, port = args[2], args[5], args[8] |
862 | 237 | add_commport_name_cb(name, text.get("description", "?"), | 228 | cb(name, text.get("description", "?"), |
863 | 238 | host, port, text.get("version", None)) | 229 | host, port, text.get("version", None)) |
864 | 239 | 230 | ||
865 | 240 | if not show_local and flags & avahi.LOOKUP_RESULT_LOCAL: | 231 | if not show_local and flags & avahi.LOOKUP_RESULT_LOCAL: |
866 | @@ -242,8 +233,8 @@ | |||
867 | 242 | 233 | ||
868 | 243 | server.ResolveService(interface, protocol, name, stype, | 234 | server.ResolveService(interface, protocol, name, stype, |
869 | 244 | domain, avahi.PROTO_UNSPEC, dbus.UInt32(0), | 235 | domain, avahi.PROTO_UNSPEC, dbus.UInt32(0), |
872 | 245 | reply_handler=handle_resolved, error_handler=handle_error) | 236 | reply_handler=lambda *a: handle_resolved(cb, *a), |
873 | 246 | 237 | error_handler=handle_error) | |
874 | 247 | 238 | ||
875 | 248 | bus, server = get_dbus_bus_server() | 239 | bus, server = get_dbus_bus_server() |
876 | 249 | domain_name = get_local_hostname()[1] | 240 | domain_name = get_local_hostname()[1] |
877 | @@ -254,7 +245,9 @@ | |||
878 | 254 | 245 | ||
879 | 255 | sbrowser = dbus.Interface(browser_name, | 246 | sbrowser = dbus.Interface(browser_name, |
880 | 256 | avahi.DBUS_INTERFACE_SERVICE_BROWSER) | 247 | avahi.DBUS_INTERFACE_SERVICE_BROWSER) |
883 | 257 | sbrowser.connect_to_signal("ItemNew", new_item_handler) | 248 | sbrowser.connect_to_signal("ItemNew", |
884 | 258 | sbrowser.connect_to_signal("ItemRemove", remove_item_handler) | 249 | lambda *a: new_item_handler(add_commport_name_cb, *a)) |
885 | 250 | sbrowser.connect_to_signal("ItemRemove", | ||
886 | 251 | lambda *a: remove_item_handler(del_commport_name_cb, *a)) | ||
887 | 259 | sbrowser.connect_to_signal("Failure", | 252 | sbrowser.connect_to_signal("Failure", |
888 | 260 | lambda *a: logging.error("avahi error %r", a)) | 253 | lambda *a: logging.error("avahi error %r", a)) |
889 | 261 | 254 | ||
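The `dbus_io.py` changes above all follow one refactor: instead of each handler closing over a module-level callback, the callback is threaded through as an explicit first argument and bound at `connect_to_signal` time with a lambda. A self-contained sketch of that binding pattern, using a toy dispatcher in place of `dbus.Interface.connect_to_signal` (the signal arguments mimic avahi's positional `ItemNew` arguments):

```python
def connect_to_signal(signal_name, handler, registry):
    """Stand-in for dbus.Interface.connect_to_signal: just record it."""
    registry[signal_name] = handler


def new_item_handler(cb, interface, protocol, name, stype, domain, flags):
    """Mirrors the diff: cb is bound first, signal args follow."""
    if name.startswith("desktopcouch "):
        cb(name[13:])  # strip the "desktopcouch " prefix, keep the host id


found = []
signals = {}
# The lambda binds which callback to run, as in the diff's
# connect_to_signal("ItemNew", lambda *a: new_item_handler(add_cb, *a)).
connect_to_signal("ItemNew",
                  lambda *a: new_item_handler(found.append, *a), signals)

# Simulate avahi emitting ItemNew with its six positional arguments.
signals["ItemNew"](0, 0, "desktopcouch abc-123", "_stype._tcp", "local", 0)
print(found)  # ['abc-123']
```

Binding the callback explicitly makes both handlers reusable for the two browser setups in this file (location and pairing), where each needs different add/remove callbacks.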
890 | === added file 'desktopcouch/pair/tests/test_couchdb_io.py' | |||
891 | --- desktopcouch/pair/tests/test_couchdb_io.py 1970-01-01 00:00:00 +0000 | |||
892 | +++ desktopcouch/pair/tests/test_couchdb_io.py 2009-10-12 14:29:10 +0000 | |||
893 | @@ -0,0 +1,140 @@ | |||
894 | 1 | # Copyright 2009 Canonical Ltd. | ||
895 | 2 | # | ||
896 | 3 | # This file is part of desktopcouch. | ||
897 | 4 | # | ||
898 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
899 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
900 | 7 | # as published by the Free Software Foundation. | ||
901 | 8 | # | ||
902 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
903 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
904 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
905 | 12 | # GNU Lesser General Public License for more details. | ||
906 | 13 | # | ||
907 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
908 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
909 | 16 | |||
910 | 17 | |||
911 | 18 | import pygtk | ||
912 | 19 | pygtk.require('2.0') | ||
913 | 20 | |||
914 | 21 | import desktopcouch.tests as dctests | ||
915 | 22 | |||
916 | 23 | from desktopcouch.pair.couchdb_pairing import couchdb_io | ||
917 | 24 | from desktopcouch.records.server import CouchDatabase | ||
918 | 25 | from desktopcouch.records.record import Record | ||
919 | 26 | import unittest | ||
920 | 27 | import uuid | ||
921 | 28 | import os | ||
922 | 29 | import httplib2 | ||
923 | 30 | URI = None # use autodiscovery that desktopcouch.tests permits. | ||
924 | 31 | |||
925 | 32 | class TestCouchdbIo(unittest.TestCase): | ||
926 | 33 | |||
927 | 34 | def setUp(self): | ||
928 | 35 | """setup each test""" | ||
929 | 36 | self.mgt_database = CouchDatabase('management', create=True, uri=URI) | ||
930 | 37 | self.foo_database = CouchDatabase('foo', create=True, uri=URI) | ||
931 | 38 | #create some records to pull out and test | ||
932 | 39 | self.foo_database.put_record(Record({ | ||
933 | 40 | "key1_1": "val1_1", "key1_2": "val1_2", "key1_3": "val1_3", | ||
934 | 41 | "record_type": "test.com"})) | ||
935 | 42 | self.foo_database.put_record(Record({ | ||
936 | 43 | "key2_1": "val2_1", "key2_2": "val2_2", "key2_3": "val2_3", | ||
937 | 44 | "record_type": "test.com"})) | ||
938 | 45 | self.foo_database.put_record(Record({ | ||
939 | 46 | "key13_1": "va31_1", "key3_2": "val3_2", "key3_3": "val3_3", | ||
940 | 47 | "record_type": "test.com"})) | ||
941 | 48 | |||
942 | 49 | def tearDown(self): | ||
943 | 50 | """tear down each test""" | ||
944 | 51 | del self.mgt_database._server['management'] | ||
945 | 52 | del self.mgt_database._server['foo'] | ||
946 | 53 | |||
947 | 54 | def test_put_static_paired_service(self): | ||
948 | 55 | service_name = "dummyfortest" | ||
949 | 56 | oauth_data = { | ||
950 | 57 | "consumer_key": str("abcdef"), | ||
951 | 58 | "consumer_secret": str("ghighjklm"), | ||
952 | 59 | "token": str("opqrst"), | ||
953 | 60 | "token_secret": str("uvwxyz"), | ||
954 | 61 | } | ||
955 | 62 | couchdb_io.put_static_paired_service(oauth_data, service_name, uri=URI) | ||
956 | 63 | pairings = list(couchdb_io.get_pairings()) | ||
957 | 64 | |||
958 | 65 | def test_put_dynamic_paired_host(self): | ||
959 | 66 | hostname = "host%d" % (os.getpid(),) | ||
960 | 67 | remote_uuid = str(uuid.uuid4()) | ||
961 | 68 | oauth_data = { | ||
962 | 69 | "consumer_key": str("abcdef"), | ||
963 | 70 | "consumer_secret": str("ghighjklm"), | ||
964 | 71 | "token": str("opqrst"), | ||
965 | 72 | "token_secret": str("uvwxyz"), | ||
966 | 73 | } | ||
967 | 74 | |||
968 | 75 | couchdb_io.put_dynamic_paired_host(hostname, remote_uuid, oauth_data, | ||
969 | 76 | uri=URI) | ||
970 | 77 | couchdb_io.put_dynamic_paired_host(hostname, remote_uuid, oauth_data, | ||
971 | 78 | uri=URI) | ||
972 | 79 | couchdb_io.put_dynamic_paired_host(hostname, remote_uuid, oauth_data, | ||
973 | 80 | uri=URI) | ||
974 | 81 | |||
975 | 82 | pairings = list(couchdb_io.get_pairings()) | ||
976 | 83 | self.assertEqual(3, len(pairings)) | ||
977 | 84 | self.assertEqual(pairings[0].value["oauth"], oauth_data) | ||
978 | 85 | self.assertEqual(pairings[0].value["server"], hostname) | ||
979 | 86 | self.assertEqual(pairings[0].value["pairing_identifier"], remote_uuid) | ||
980 | 87 | |||
981 | 88 | for i, row in enumerate(pairings): | ||
982 | 89 | couchdb_io.remove_pairing(row.id, i == 1) | ||
983 | 90 | |||
984 | 91 | pairings = list(couchdb_io.get_pairings()) | ||
985 | 92 | self.assertEqual(0, len(pairings)) | ||
986 | 93 | |||
987 | 94 | |||
988 | 95 | def test_get_database_names_replicatable_bad_server(self): | ||
989 | 96 | # If this resolves, FIRE YOUR DNS PROVIDER. | ||
990 | 97 | |||
991 | 98 | try: | ||
992 | 99 | names = couchdb_io.get_database_names_replicatable( | ||
993 | 100 | uri='http://test.desktopcouch.example.com:9/') | ||
994 | 101 | self.assertEqual(set(), names) | ||
995 | 102 | except httplib2.ServerNotFoundError: | ||
996 | 103 | pass | ||
997 | 104 | |||
998 | 105 | def test_get_database_names_replicatable(self): | ||
999 | 106 | names = couchdb_io.get_database_names_replicatable(uri=URI) | ||
1000 | 107 | self.assertFalse('management' in names) | ||
1001 | 108 | self.assertTrue('foo' in names) | ||
1002 | 109 | |||
1003 | 110 | def test_get_my_host_unique_id(self): | ||
1004 | 111 | got = couchdb_io.get_my_host_unique_id(uri=URI) | ||
1005 | 112 | again = couchdb_io.get_my_host_unique_id(uri=URI) | ||
1006 | 113 | self.assertEquals(len(got), 1) | ||
1007 | 114 | self.assertEquals(got, again) | ||
1008 | 115 | |||
1009 | 116 | def test_mkuri(self): | ||
1010 | 117 | uri = couchdb_io.mkuri( | ||
1011 | 118 | 'fnord.org', 55241, has_ssl=True, path='a/b/c', | ||
1012 | 119 | auth_pair=('f o o', 'b=a=r')) | ||
1013 | 120 | self.assertEquals( | ||
1014 | 121 | 'https://f%20o%20o:b%3Da%3Dr@fnord.org:55241/a/b/c', uri) | ||
1015 | 122 | |||
1016 | 123 | def Xtest_replication_good(self): | ||
1017 | 124 | pass | ||
1018 | 125 | |||
1019 | 126 | def Xtest_replication_no_oauth_remote(self): | ||
1020 | 127 | pass | ||
1021 | 128 | |||
1022 | 129 | def Xtest_replication_bad_oauth_remote(self): | ||
1023 | 130 | pass | ||
1024 | 131 | |||
1025 | 132 | def Xtest_replication_no_oauth_local(self): | ||
1026 | 133 | pass | ||
1027 | 134 | |||
1028 | 135 | def Xtest_replication_bad_oauth_local(self): | ||
1029 | 136 | pass | ||
1030 | 137 | |||
1031 | 138 | |||
1032 | 139 | if __name__ == "__main__": | ||
1033 | 140 | unittest.main() | ||
1034 | 0 | 141 | ||
1035 | === removed file 'desktopcouch/pair/tests/test_couchdb_io.py' | |||
1036 | --- desktopcouch/pair/tests/test_couchdb_io.py 2009-09-28 12:06:08 +0000 | |||
1037 | +++ desktopcouch/pair/tests/test_couchdb_io.py 1970-01-01 00:00:00 +0000 | |||
1038 | @@ -1,133 +0,0 @@ | |||
1039 | 1 | # Copyright 2009 Canonical Ltd. | ||
1040 | 2 | # | ||
1041 | 3 | # This file is part of desktopcouch. | ||
1042 | 4 | # | ||
1043 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
1044 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
1045 | 7 | # as published by the Free Software Foundation. | ||
1046 | 8 | # | ||
1047 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
1048 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1049 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1050 | 12 | # GNU Lesser General Public License for more details. | ||
1051 | 13 | # | ||
1052 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1053 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
1054 | 16 | |||
1055 | 17 | |||
1056 | 18 | import pygtk | ||
1057 | 19 | pygtk.require('2.0') | ||
1058 | 20 | |||
1059 | 21 | import desktopcouch.tests as dctests | ||
1060 | 22 | |||
1061 | 23 | from desktopcouch.pair.couchdb_pairing import couchdb_io | ||
1062 | 24 | from desktopcouch.records.server import CouchDatabase | ||
1063 | 25 | from desktopcouch.records.record import Record | ||
1064 | 26 | import unittest | ||
1065 | 27 | import uuid | ||
1066 | 28 | import os | ||
1067 | 29 | import httplib2 | ||
1068 | 30 | URI = None # use autodiscovery that desktopcouch.tests permits. | ||
1069 | 31 | |||
1070 | 32 | class TestCouchdbIo(unittest.TestCase): | ||
1071 | 33 | |||
1072 | 34 | def setUp(self): | ||
1073 | 35 | """setup each test""" | ||
1074 | 36 | self.mgt_database = CouchDatabase('management', create=True, uri=URI) | ||
1075 | 37 | self.foo_database = CouchDatabase('foo', create=True, uri=URI) | ||
1076 | 38 | #create some records to pull out and test | ||
1077 | 39 | self.foo_database.put_record(Record({ | ||
1078 | 40 | "key1_1": "val1_1", "key1_2": "val1_2", "key1_3": "val1_3", | ||
1079 | 41 | "record_type": "test.com"})) | ||
1080 | 42 | self.foo_database.put_record(Record({ | ||
1081 | 43 | "key2_1": "val2_1", "key2_2": "val2_2", "key2_3": "val2_3", | ||
1082 | 44 | "record_type": "test.com"})) | ||
1083 | 45 | self.foo_database.put_record(Record({ | ||
1084 | 46 | "key13_1": "va31_1", "key3_2": "val3_2", "key3_3": "val3_3", | ||
1085 | 47 | "record_type": "test.com"})) | ||
1086 | 48 | |||
1087 | 49 | def tearDown(self): | ||
1088 | 50 | """tear down each test""" | ||
1089 | 51 | del self.mgt_database._server['management'] | ||
1090 | 52 | del self.mgt_database._server['foo'] | ||
1091 | 53 | |||
1092 | 54 | def test_put_static_paired_service(self): | ||
1093 | 55 | service_name = "dummyfortest" | ||
1094 | 56 | oauth_data = { | ||
1095 | 57 | "consumer_key": str("abcdef"), | ||
1096 | 58 | "consumer_secret": str("ghighjklm"), | ||
1097 | 59 | "token": str("opqrst"), | ||
1098 | 60 | "token_secret": str("uvwxyz"), | ||
1099 | 61 | } | ||
1100 | 62 | couchdb_io.put_static_paired_service(oauth_data, service_name, uri=URI) | ||
1101 | 63 | pairings = list(couchdb_io.get_pairings()) | ||
1102 | 64 | |||
1103 | 65 | def test_put_dynamic_paired_host(self): | ||
1104 | 66 | hostname = "host%d" % (os.getpid(),) | ||
1105 | 67 | remote_uuid = str(uuid.uuid4()) | ||
1106 | 68 | oauth_data = { | ||
1107 | 69 | "consumer_key": str("abcdef"), | ||
1108 | 70 | "consumer_secret": str("ghighjklm"), | ||
1109 | 71 | "token": str("opqrst"), | ||
1110 | 72 | "token_secret": str("uvwxyz"), | ||
1111 | 73 | } | ||
1112 | 74 | |||
1113 | 75 | couchdb_io.put_dynamic_paired_host(hostname, remote_uuid, oauth_data, | ||
1114 | 76 | uri=URI) | ||
1115 | 77 | couchdb_io.put_dynamic_paired_host(hostname, remote_uuid, oauth_data, | ||
1116 | 78 | uri=URI) | ||
1117 | 79 | couchdb_io.put_dynamic_paired_host(hostname, remote_uuid, oauth_data, | ||
1118 | 80 | uri=URI) | ||
1119 | 81 | |||
1120 | 82 | pairings = list(couchdb_io.get_pairings()) | ||
1121 | 83 | self.assertEqual(3, len(pairings)) | ||
1122 | 84 | self.assertEqual(pairings[0].value["oauth"], oauth_data) | ||
1123 | 85 | self.assertEqual(pairings[0].value["server"], hostname) | ||
1124 | 86 | self.assertEqual(pairings[0].value["pairing_identifier"], remote_uuid) | ||
1125 | 87 | |||
1126 | 88 | for i, row in enumerate(pairings): | ||
1127 | 89 | couchdb_io.remove_pairing(row.id, i == 1) | ||
1128 | 90 | |||
1129 | 91 | pairings = list(couchdb_io.get_pairings()) | ||
1130 | 92 | self.assertEqual(0, len(pairings)) | ||
1131 | 93 | |||
1132 | 94 | |||
1133 | 95 | def test_get_database_names_replicatable_bad_server(self): | ||
1134 | 96 | # If this resolves, FIRE YOUR DNS PROVIDER. | ||
1135 | 97 | |||
1136 | 98 | try: | ||
1137 | 99 | names = couchdb_io.get_database_names_replicatable( | ||
1138 | 100 | uri='http://test.desktopcouch.example.com:9/') | ||
1139 | 101 | self.assertEqual(set(), names) | ||
1140 | 102 | except httplib2.ServerNotFoundError: | ||
1141 | 103 | pass | ||
1142 | 104 | |||
1143 | 105 | def test_get_database_names_replicatable(self): | ||
1144 | 106 | names = couchdb_io.get_database_names_replicatable(uri=URI) | ||
1145 | 107 | self.assertFalse('management' in names) | ||
1146 | 108 | self.assertTrue('foo' in names) | ||
1147 | 109 | |||
1148 | 110 | def test_get_my_host_unique_id(self): | ||
1149 | 111 | got = couchdb_io.get_my_host_unique_id(uri=URI) | ||
1150 | 112 | again = couchdb_io.get_my_host_unique_id(uri=URI) | ||
1151 | 113 | self.assertEquals(len(got), 1) | ||
1152 | 114 | self.assertEquals(got, again) | ||
1153 | 115 | |||
1154 | 116 | def Xtest_replication_good(self): | ||
1155 | 117 | pass | ||
1156 | 118 | |||
1157 | 119 | def Xtest_replication_no_oauth_remote(self): | ||
1158 | 120 | pass | ||
1159 | 121 | |||
1160 | 122 | def Xtest_replication_bad_oauth_remote(self): | ||
1161 | 123 | pass | ||
1162 | 124 | |||
1163 | 125 | def Xtest_replication_no_oauth_local(self): | ||
1164 | 126 | pass | ||
1165 | 127 | |||
1166 | 128 | def Xtest_replication_bad_oauth_local(self): | ||
1167 | 129 | pass | ||
1168 | 130 | |||
1169 | 131 | |||
1170 | 132 | if __name__ == "__main__": | ||
1171 | 133 | unittest.main() | ||
1172 | 134 | 0 | ||
1173 | === modified file 'desktopcouch/records/couchgrid.py' | |||
1174 | --- desktopcouch/records/couchgrid.py 2009-08-27 15:32:11 +0000 | |||
1175 | +++ desktopcouch/records/couchgrid.py 2009-10-12 14:29:10 +0000 | |||
1176 | @@ -212,7 +212,7 @@ | |||
1177 | 212 | pass | 212 | pass |
1178 | 213 | 213 | ||
1179 | 214 | #set the last value as the document_id, and append | 214 | #set the last value as the document_id, and append |
1181 | 215 | row[-1] = r.key | 215 | row[-1] = r.value["_id"] |
1182 | 216 | self.list_store.append(row) | 216 | self.list_store.append(row) |
1183 | 217 | 217 | ||
1184 | 218 | #apply the model to the Treeview | 218 | #apply the model to the Treeview |
1185 | @@ -341,19 +341,6 @@ | |||
1186 | 341 | for r in rows: | 341 | for r in rows: |
1187 | 342 | selection.select_path(r) | 342 | selection.select_path(r) |
1188 | 343 | 343 | ||
1189 | 344 | @property | ||
1190 | 345 | def selected_records(self): | ||
1191 | 346 | """ selected_records - returns a list of Record objects | ||
1192 | 347 | for those selected in the CouchGrid. | ||
1193 | 348 | |||
1194 | 349 | This property is read only. | ||
1195 | 350 | |||
1196 | 351 | """ | ||
1197 | 352 | recs = [] #a list of records to return | ||
1198 | 353 | for id in self.selected_record_ids: | ||
1199 | 354 | #retrieve a record for each id | ||
1200 | 355 | recs.append(Record(record_id = id, record_type = self.record_type)) | ||
1201 | 356 | return recs | ||
1202 | 357 | 344 | ||
1203 | 358 | def __reset_model(self): | 345 | def __reset_model(self): |
1204 | 359 | """ __reset_model - internal function, do not call directly. | 346 | """ __reset_model - internal function, do not call directly. |
1205 | @@ -434,10 +421,6 @@ | |||
1206 | 434 | for r in cw.selected_record_ids: | 421 | for r in cw.selected_record_ids: |
1207 | 435 | disp += str(r) + "\n" | 422 | disp += str(r) + "\n" |
1208 | 436 | 423 | ||
1209 | 437 | disp += "\n\nRecords:\n" | ||
1210 | 438 | for r in cw.selected_records: | ||
1211 | 439 | disp += str(r) + "\n" | ||
1212 | 440 | |||
1213 | 441 | tv.get_buffer().set_text(disp) | 424 | tv.get_buffer().set_text(disp) |
1214 | 442 | 425 | ||
1215 | 443 | def __select_ids(widget, widgets): | 426 | def __select_ids(widget, widgets): |
1216 | 444 | 427 | ||
1217 | === added file 'desktopcouch/records/doc/field_registry.txt' | |||
1218 | --- desktopcouch/records/doc/field_registry.txt 1970-01-01 00:00:00 +0000 | |||
1219 | +++ desktopcouch/records/doc/field_registry.txt 2009-10-12 14:29:10 +0000 | |||
1220 | @@ -0,0 +1,213 @@ | |||
1221 | 1 | The Field Registry and Transformers | ||
1222 | 2 | |||
1223 | 3 | Creating a field registry and/or a custom Transformer object is an | ||
1224 | 4 | easy yet flexible way to map data structures between desktopcouch and | ||
1225 | 5 | existing applications. | ||
1226 | 6 | |||
1227 | 7 | >>> from desktopcouch.records.field_registry import ( | ||
1228 | 8 | ... SimpleFieldMapping, MergeableListFieldMapping, Transformer) | ||
1229 | 9 | >>> from desktopcouch.records.record import Record | ||
1230 | 10 | |||
1231 | 11 | Say we have a very simple audiofile record type that defines 'artist' | ||
1232 | 12 | and 'title' string fields. Now also say we have an application that | ||
1233 | 13 | wants to interact with records of this type called 'My Awesome Music | ||
1234 | 14 | Player' or MAMP. The developers of MAMP use a data structure that has | ||
1235 | 15 | the same fields, but uses slightly different names for them: | ||
1236 | 16 | 'songtitle' and 'songartist'. We can now define a mapping between the | ||
1237 | 17 | fields: | ||
1238 | 18 | |||
1239 | 19 | >>> my_registry = { | ||
1240 | 20 | ... 'songartist': SimpleFieldMapping('artist'), | ||
1241 | 21 | ... 'songtitle': SimpleFieldMapping('title') | ||
1242 | 22 | ... } | ||
1243 | 23 | |||
1244 | 24 | and instantiate a Transformer object: | ||
1245 | 25 | |||
1246 | 26 | >>> my_transformer = Transformer('My Awesome Music Player', my_registry) | ||
1247 | 27 | |||
1248 | 28 | If MAMP has the following song object (a plain dictionary): | ||
1249 | 29 | |||
1250 | 30 | >>> my_song = { | ||
1251 | 31 | ... 'songartist': 'Thomas Tantrum', | ||
1252 | 32 | ... 'songtitle': 'Shake It Shake It' | ||
1253 | 33 | ... } | ||
1254 | 34 | |||
1255 | 35 | We can have the transformer transform it into a desktopcouch record | ||
1256 | 36 | object: | ||
1257 | 37 | |||
1258 | 38 | >>> AUDIO_FILE_RECORD_TYPE = 'http://example.org/record_types/audio_file' | ||
1259 | 39 | >>> new_record = Record(record_type=AUDIO_FILE_RECORD_TYPE) | ||
1260 | 40 | >>> my_transformer.from_app(my_song, new_record) | ||
1261 | 41 | |||
1262 | 42 | Now we can look at the underlying data: | ||
1263 | 43 | |||
1264 | 44 | >>> new_record._data #doctest: +NORMALIZE_WHITESPACE | ||
1265 | 45 | {'record_type': 'http://example.org/record_types/audio_file', | ||
1266 | 46 | 'title': 'Shake It Shake It', | ||
1267 | 47 | 'artist': 'Thomas Tantrum'} | ||
1268 | 48 | |||
1269 | 49 | You might think that this doesn't help much, and that the code you | ||
1270 | 50 | would have written yourself would not have been much bigger than | ||
1271 | 51 | using the Transformer. You'd be right, but this is not all the | ||
1272 | 52 | transformers do. Let's say the song in | ||
1273 | 53 | MAMP also has a field 'number_of_times_played_in_mamp': | ||
1274 | 54 | |||
1275 | 55 | >>> my_song = { | ||
1276 | 56 | ... 'songartist': 'Thomas Tantrum', | ||
1277 | 57 | ... 'songtitle': 'Shake It Shake It', | ||
1278 | 58 | ... 'number_of_times_played_in_mamp': 23 | ||
1279 | 59 | ... } | ||
1280 | 60 | |||
1281 | 61 | Obviously that is not a field defined by our record type, since it is | ||
1282 | 62 | exceedingly unlikely that any other application would be interested in | ||
1283 | 63 | this data. Let's see what happens if we run the transformation with | ||
1284 | 64 | this field present, but undefined in the field registry: | ||
1285 | 65 | |||
1286 | 66 | >>> new_record = Record(record_type=AUDIO_FILE_RECORD_TYPE) | ||
1287 | 67 | >>> my_transformer.from_app(my_song, new_record) | ||
1288 | 68 | |||
1289 | 69 | >>> new_record._data #doctest: +NORMALIZE_WHITESPACE | ||
1290 | 70 | {'record_type': 'http://example.org/record_types/audio_file', | ||
1291 | 71 | 'title': 'Shake It Shake It', | ||
1292 | 72 | 'application_annotations': {'My Awesome Music Player': {'application_fields': {'number_of_times_played_in_mamp': 23}}}, | ||
1293 | 73 | 'artist': 'Thomas Tantrum'} | ||
1294 | 74 | |||
1295 | 75 | The transformer, when it encountered a field it had no knowledge of, | ||
1296 | 76 | assumed it was specific to this application, and instead of ignoring | ||
1297 | 77 | it, stuffed it in the proper place in application_annotations. That's | ||
1298 | 78 | already quite useful. | ||
1299 | 79 | |||
1300 | 80 | Let's try something a little trickier and more contrived. Say MAMP | ||
1301 | 81 | annotates each song in some other interesting ways: let's say it | ||
1302 | 82 | allows three very specific tags on each song: | ||
1303 | 83 | |||
1304 | 84 | >>> my_song = { | ||
1305 | 85 | ... 'songartist': 'Thomas Tantrum', | ||
1306 | 86 | ... 'songtitle': 'Shake It Shake It', | ||
1307 | 87 | ... 'number_of_times_played_in_mamp': 23, | ||
1308 | 88 | ... 'tag_vocals': 'female vocals', | ||
1309 | 89 | ... 'tag_title': 'shaking', | ||
1310 | 90 | ... 'tag_subject': 'talking' | ||
1311 | 91 | ... } | ||
1312 | 92 | |||
1313 | 93 | Our record type is a little more enlightened, and allows any number of | ||
1314 | 94 | tags, in a field 'tags', where each tag has a field 'tag' and a | ||
1315 | 95 | field 'description'. It would be nice if we could keep a mapping | ||
1316 | 96 | between the tags that MAMP cares about, and the ones in our | ||
1317 | 97 | record. We'll have to do just a little more work, but we can. We'll | ||
1318 | 98 | make a new field_registry, and instantiate a new transformer with it: | ||
1319 | 99 | |||
1320 | 100 | >>> my_registry = { | ||
1321 | 101 | ... 'songartist': SimpleFieldMapping('artist'), | ||
1322 | 102 | ... 'songtitle': SimpleFieldMapping('title'), | ||
1323 | 103 | ... 'tag_vocals': MergeableListFieldMapping( | ||
1324 | 104 | ... 'My Awesome Music Player', 'vocals_tag', 'tags', 'tag', | ||
1325 | 105 | ... default_values={'description': 'vocals'}), | ||
1326 | 106 | ... 'tag_title': MergeableListFieldMapping( | ||
1327 | 107 | ... 'My Awesome Music Player', 'title_tag', 'tags', 'tag', | ||
1328 | 108 | ... default_values={'description': 'title'}), | ||
1329 | 109 | ... 'tag_subject': MergeableListFieldMapping( | ||
1330 | 110 | ... 'My Awesome Music Player', 'subject_tag', 'tags', 'tag', | ||
1331 | 111 | ... default_values={'description': 'subject'}), | ||
1332 | 112 | ... } | ||
1333 | 113 | |||
1334 | 114 | >>> my_transformer = Transformer('My Awesome Music Player', my_registry) | ||
1335 | 115 | >>> new_record = Record(record_type=AUDIO_FILE_RECORD_TYPE) | ||
1336 | 116 | >>> my_transformer.from_app(my_song, new_record) | ||
1337 | 117 | |||
1338 | 118 | Since _data will now contain lots of uuids to keep references intact, | ||
1339 | 119 | it's less readable, and a less clear example, so I'll show you what | ||
1340 | 120 | using the higher level API results in: | ||
1341 | 121 | |||
1342 | 122 | >>> [tag['tag'] for tag in new_record['tags']] | ||
1343 | 123 | ['shaking', 'talking', 'female vocals'] | ||
1344 | 124 | >>> [tag['description'] for tag in new_record['tags']] | ||
1345 | 125 | ['title', 'subject', 'vocals'] | ||
1346 | 126 | |||
1347 | 127 | Let's say we append a tag: | ||
1348 | 128 | |||
1349 | 129 | >>> new_record['tags'].append({'tag': 'yeah yeah no'}) | ||
1350 | 130 | |||
1351 | 131 | and we do the same thing: | ||
1352 | 132 | |||
1353 | 133 | >>> [tag['tag'] for tag in new_record['tags']] | ||
1354 | 134 | ['shaking', 'talking', 'female vocals', 'yeah yeah no'] | ||
1355 | 135 | >>> [tag.get('description') for tag in new_record['tags']] | ||
1356 | 136 | ['title', 'subject', 'vocals', None] | ||
1357 | 137 | |||
1358 | 138 | and say we change the first tag: | ||
1359 | 139 | |||
1360 | 140 | >>> new_record['tags'][0]['tag'] = 'shaking it' | ||
1361 | 141 | |||
1362 | 142 | and now look at transforming in the other direction: | ||
1363 | 143 | |||
1364 | 144 | >>> new_song = {} | ||
1365 | 145 | >>> my_transformer.to_app(new_record, new_song) | ||
1366 | 146 | >>> new_song #doctest: +NORMALIZE_WHITESPACE | ||
1367 | 147 | {'tag_title': 'shaking it', | ||
1368 | 148 | 'tag_subject': 'talking', | ||
1369 | 149 | 'tag_vocals': 'female vocals', | ||
1370 | 150 | 'songtitle': 'Shake It Shake It', | ||
1371 | 151 | 'songartist': 'Thomas Tantrum', | ||
1372 | 152 | 'number_of_times_played_in_mamp': 23} | ||
1373 | 153 | |||
1374 | 154 | We see that we got the data that was in the original song, except with | ||
1375 | 155 | the tag_title value changed to 'shaking it', exactly as we'd expect. | ||
1376 | 156 | |||
1377 | 157 | Many more things are possible by creating new Transformers and/or | ||
1378 | 158 | FieldMapping types. I'll give one last example. Let us say that our | ||
1379 | 159 | record_type defines a rating field that's a value between 0 and | ||
1380 | 160 | 100. Let's also say that MAMP stores a string with anywhere between | ||
1381 | 161 | zero and five stars. | ||
1382 | 162 | |||
1383 | 163 | >>> class StarIntMapping(SimpleFieldMapping): | ||
1384 | 164 | ... """Map a five star rating system to a score of 0 to 100 as | ||
1385 | 165 | ... losslessly as possible. | ||
1386 | 166 | ... """ | ||
1387 | 167 | ... | ||
1388 | 168 | ... def getValue(self, record): | ||
1389 | 169 | ... """Get the value for the registered field.""" | ||
1390 | 170 | ... score = record.get(self._fieldname) | ||
1391 | 171 | ... stars = score / 20 | ||
1392 | 172 | ... remainder = score % 20 | ||
1393 | 173 | ... if remainder >= 5: | ||
1394 | 174 | ... stars += 1 | ||
1395 | 175 | ... return "*" * stars | ||
1396 | 176 | ... | ||
1397 | 177 | ... def setValue(self, record, value): | ||
1398 | 178 | ... """Set the value for the registered field.""" | ||
1399 | 179 | ... if value is None: | ||
1400 | 180 | ... self.deleteValue(record) | ||
1401 | 181 | ... return | ||
1402 | 182 | ... star_score = len(value) * 20 | ||
1403 | 183 | ... score = record.get(self._fieldname) | ||
1404 | 184 | ... if score is None or abs(star_score - score) > 5: | ||
1405 | 185 | ... record[self._fieldname] = star_score | ||
1406 | 186 | ... # else we keep the original value, since it was close | ||
1407 | 187 | ... # enough and more precise | ||
1408 | 188 | |||
1409 | 189 | And we make a registry and a transformer: | ||
1410 | 190 | |||
1411 | 191 | >>> my_registry = { | ||
1412 | 192 | ... 'songartist': SimpleFieldMapping('artist'), | ||
1413 | 193 | ... 'songtitle': SimpleFieldMapping('title'), | ||
1414 | 194 | ... 'stars': StarIntMapping('score'), | ||
1415 | 195 | ... } | ||
1416 | 196 | >>> my_transformer = Transformer('My Awesome Music Player', my_registry) | ||
1417 | 197 | |||
1418 | 198 | Create a song with a rating: | ||
1419 | 199 | |||
1420 | 200 | >>> my_song = { | ||
1421 | 201 | ... 'songartist': 'Thomas Tantrum', | ||
1422 | 202 | ... 'songtitle': 'Shake It Shake It', | ||
1423 | 203 | ... 'stars': '*****', | ||
1424 | 204 | ... 'number_of_times_played_in_mamp': 23 | ||
1425 | 205 | ... } | ||
1426 | 206 | |||
1427 | 207 | >>> new_record = Record(record_type=AUDIO_FILE_RECORD_TYPE) | ||
1428 | 208 | >>> my_transformer.from_app(my_song, new_record) | ||
1429 | 209 | >>> new_record['score'] | ||
1430 | 210 | 100 | ||
1431 | 211 | |||
1432 | 212 | And, I don't know if you've ever heard the song in question, but that | ||
1433 | 213 | is in fact correct! ;) | ||
1434 | 0 | 214 | ||
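The StarIntMapping above buries the conversion rules inside a FieldMapping subclass. As a standalone sketch of the same rounding logic (the names `score_to_stars` and `stars_to_score` are illustrative, not part of desktopcouch), it can be checked in isolation:

```python
def score_to_stars(score):
    """Render a 0-100 score as zero to five stars, rounding a
    remainder of 5 or more up to the next star (as getValue does)."""
    stars, remainder = divmod(score, 20)
    if remainder >= 5:
        stars += 1
    return "*" * stars


def stars_to_score(stars, previous_score=None):
    """Turn a star string back into a score.  Like setValue, keep an
    existing score within 5 of the star value, since it is more
    precise than the coarse star representation."""
    star_score = len(stars) * 20
    if previous_score is not None and abs(star_score - previous_score) <= 5:
        return previous_score
    return star_score


assert score_to_stars(100) == "*****"   # the doctest's five-star example
assert score_to_stars(44) == "**"       # remainder 4: round down
assert score_to_stars(45) == "***"      # remainder 5: round up
assert stars_to_score("***", previous_score=57) == 57  # close enough: keep
assert stars_to_score("***") == 60
```

This shows why the doctest's `new_record['score']` is 100: five stars times 20 maps straight to the top of the range.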
1435 | === modified file 'desktopcouch/records/doc/records.txt' | |||
1436 | --- desktopcouch/records/doc/records.txt 2009-07-31 13:44:45 +0000 | |||
1437 | +++ desktopcouch/records/doc/records.txt 2009-10-12 14:29:10 +0000 | |||
1438 | @@ -3,15 +3,16 @@ | |||
1439 | 3 | >>> from desktopcouch.records.server import CouchDatabase | 3 | >>> from desktopcouch.records.server import CouchDatabase |
1440 | 4 | >>> from desktopcouch.records.record import Record | 4 | >>> from desktopcouch.records.record import Record |
1441 | 5 | 5 | ||
1443 | 6 | Create a database object. Your database needs to exist. If it doesn't, you | 6 | Create a database object. Your database needs to exist. If it doesn't, you |
1444 | 7 | can create it by passing create=True. | 7 | can create it by passing create=True. |
1445 | 8 | 8 | ||
1446 | 9 | >>> db = CouchDatabase('testing', create=True) | 9 | >>> db = CouchDatabase('testing', create=True) |
1447 | 10 | 10 | ||
1452 | 11 | Create a Record object. Records have a record type, which should be a URL. | 11 | Create a Record object. Records have a record type, which should be a |
1453 | 12 | The URL should point to a human-readable document which describes your | 12 | URL. The URL should point to a human-readable document which |
1454 | 13 | record type. (This is not checked, though.) You can pass in an initial set | 13 | describes your record type. (This is not checked, though.) You can |
1455 | 14 | of data. | 14 | pass in an initial set of data. |
1456 | 15 | |||
1457 | 15 | >>> r = Record({'a':'b'}, record_type='http://example.com/testrecord') | 16 | >>> r = Record({'a':'b'}, record_type='http://example.com/testrecord') |
1458 | 16 | 17 | ||
1459 | 17 | Records work like Python dicts. | 18 | Records work like Python dicts. |
1460 | @@ -32,6 +33,7 @@ | |||
1461 | 32 | There is no ad-hoc query functionality. | 33 | There is no ad-hoc query functionality. |
1462 | 33 | 34 | ||
1463 | 34 | For views, you should specify a design document for most calls. | 35 | For views, you should specify a design document for most calls. |
1464 | 36 | |||
1465 | 35 | >>> design_doc = "application" | 37 | >>> design_doc = "application" |
1466 | 36 | 38 | ||
1467 | 37 | To create a view: | 39 | To create a view: |
1468 | @@ -41,20 +43,24 @@ | |||
1469 | 41 | >>> db.add_view("blueberries", map_js, reduce_js, design_doc) | 43 | >>> db.add_view("blueberries", map_js, reduce_js, design_doc) |
1470 | 42 | 44 | ||
1471 | 43 | List views for a given design document: | 45 | List views for a given design document: |
1472 | 46 | |||
1473 | 44 | >>> db.list_views(design_doc) | 47 | >>> db.list_views(design_doc) |
1474 | 45 | ['blueberries'] | 48 | ['blueberries'] |
1475 | 46 | 49 | ||
1476 | 47 | Test that a view exists: | 50 | Test that a view exists: |
1477 | 51 | |||
1478 | 48 | >>> db.view_exists("blueberries", design_doc) | 52 | >>> db.view_exists("blueberries", design_doc) |
1479 | 49 | True | 53 | True |
1480 | 50 | 54 | ||
1483 | 51 | Execute a view. Results from execute_view() take list-like syntax to pick one | 55 | Execute a view. Results from execute_view() take list-like syntax to |
1484 | 52 | or more rows to retreive. Use index or slice notation. | 56 | pick one or more rows to retrieve. Use index or slice notation. |
1485 | 57 | |||
1486 | 53 | >>> result = db.execute_view("blueberries", design_doc) | 58 | >>> result = db.execute_view("blueberries", design_doc) |
1487 | 54 | >>> for row in result["idfoo"]: | 59 | >>> for row in result["idfoo"]: |
1488 | 55 | ... pass # all rows with id "idfoo". Unlike lists, may be more than one. | 60 | ... pass # all rows with id "idfoo". Unlike lists, may be more than one. |
1489 | 56 | 61 | ||
1490 | 57 | Finally, remove a view. It returns a dict containing the deleted view data. | 62 | Finally, remove a view. It returns a dict containing the deleted view data. |
1491 | 63 | |||
1492 | 58 | >>> db.delete_view("blueberries", design_doc) | 64 | >>> db.delete_view("blueberries", design_doc) |
1493 | 59 | {'map': 'function(doc) { emit(doc._id, null) }'} | 65 | {'map': 'function(doc) { emit(doc._id, null) }'} |
1494 | 60 | 66 | ||
1495 | 61 | 67 | ||
1496 | === modified file 'desktopcouch/records/server.py' | |||
1497 | --- desktopcouch/records/server.py 2009-09-28 12:06:08 +0000 | |||
1498 | +++ desktopcouch/records/server.py 2009-10-12 14:29:10 +0000 | |||
1499 | @@ -22,6 +22,7 @@ | |||
1500 | 22 | """The Desktop Couch Records API.""" | 22 | """The Desktop Couch Records API.""" |
1501 | 23 | 23 | ||
1502 | 24 | from couchdb import Server | 24 | from couchdb import Server |
1503 | 25 | from couchdb.client import Resource | ||
1504 | 25 | import desktopcouch | 26 | import desktopcouch |
1505 | 26 | from desktopcouch.records import server_base | 27 | from desktopcouch.records import server_base |
1506 | 27 | 28 | ||
1507 | @@ -37,7 +38,7 @@ | |||
1508 | 37 | oauth_tokens["consumer_key"], oauth_tokens["consumer_secret"], | 38 | oauth_tokens["consumer_key"], oauth_tokens["consumer_secret"], |
1509 | 38 | oauth_tokens["token"], oauth_tokens["token_secret"]) | 39 | oauth_tokens["token"], oauth_tokens["token_secret"]) |
1510 | 39 | http.add_oauth_tokens(consumer_key, consumer_secret, token, token_secret) | 40 | http.add_oauth_tokens(consumer_key, consumer_secret, token, token_secret) |
1512 | 40 | self.resource = server_base.Resource(http, uri) | 41 | self.resource = Resource(http, uri) |
1513 | 41 | 42 | ||
1514 | 42 | class CouchDatabase(server_base.CouchDatabaseBase): | 43 | class CouchDatabase(server_base.CouchDatabaseBase): |
1515 | 43 | """A small, records-specific abstraction over a CouchDB database.""" | 44 | """A small, records-specific abstraction over a CouchDB database.""" |
1516 | 44 | 45 | ||
1517 | === added file 'desktopcouch/records/server_base.py' | |||
1518 | --- desktopcouch/records/server_base.py 1970-01-01 00:00:00 +0000 | |||
1519 | +++ desktopcouch/records/server_base.py 2009-10-12 14:29:10 +0000 | |||
1520 | @@ -0,0 +1,335 @@ | |||
1521 | 1 | # Copyright 2009 Canonical Ltd. | ||
1522 | 2 | # | ||
1523 | 3 | # This file is part of desktopcouch. | ||
1524 | 4 | # | ||
1525 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
1526 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
1527 | 7 | # as published by the Free Software Foundation. | ||
1528 | 8 | # | ||
1529 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
1530 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1531 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1532 | 12 | # GNU Lesser General Public License for more details. | ||
1533 | 13 | # | ||
1534 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1535 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
1536 | 16 | # | ||
1537 | 17 | # Authors: Eric Casteleijn <eric.casteleijn@canonical.com> | ||
1538 | 18 | # Mark G. Saye <mark.saye@canonical.com> | ||
1539 | 19 | # Stuart Langridge <stuart.langridge@canonical.com> | ||
1540 | 20 | # Chad Miller <chad.miller@canonical.com> | ||
1541 | 21 | |||
1542 | 22 | """The Desktop Couch Records API.""" | ||
1543 | 23 | |||
1544 | 24 | from couchdb import Server | ||
1545 | 25 | from couchdb.client import ResourceNotFound, ResourceConflict | ||
1546 | 26 | from couchdb.design import ViewDefinition | ||
1547 | 27 | from record import Record | ||
1548 | 28 | import httplib2 | ||
1549 | 29 | from oauth import oauth | ||
1550 | 30 | import urlparse | ||
1551 | 31 | import cgi | ||
1552 | 32 | |||
1553 | 33 | #DEFAULT_DESIGN_DOCUMENT = "design" | ||
1554 | 34 | DEFAULT_DESIGN_DOCUMENT = None # each view in its own eponymous design doc. | ||
1555 | 35 | |||
1556 | 36 | |||
1557 | 37 | class NoSuchDatabase(Exception): | ||
1558 | 38 | "Exception for trying to use a non-existent database" | ||
1559 | 39 | |||
1560 | 40 | def __init__(self, dbname): | ||
1561 | 41 | self.database = dbname | ||
1562 | 42 | super(NoSuchDatabase, self).__init__() | ||
1563 | 43 | |||
1564 | 44 | def __str__(self): | ||
1565 | 45 | return ("Database %s does not exist on this server. (Create it by " | ||
1566 | 46 | "passing create=True)") % self.database | ||
1567 | 47 | |||
1568 | 48 | class OAuthAuthentication(httplib2.Authentication): | ||
1569 | 49 | """An httplib2.Authentication subclass for OAuth""" | ||
1570 | 50 | def __init__(self, oauth_data, host, request_uri, headers, response, | ||
1571 | 51 | content, http): | ||
1572 | 52 | self.oauth_data = oauth_data | ||
1573 | 53 | httplib2.Authentication.__init__(self, None, host, request_uri, | ||
1574 | 54 | headers, response, content, http) | ||
1575 | 55 | |||
1576 | 56 | def request(self, method, request_uri, headers, content): | ||
1577 | 57 | """Modify the request headers to add the appropriate | ||
1578 | 58 | Authorization header.""" | ||
1579 | 59 | consumer = oauth.OAuthConsumer(self.oauth_data['consumer_key'], | ||
1580 | 60 | self.oauth_data['consumer_secret']) | ||
1581 | 61 | access_token = oauth.OAuthToken(self.oauth_data['token'], | ||
1582 | 62 | self.oauth_data['token_secret']) | ||
1583 | 63 | scheme = "http" | ||
1584 | 64 | sig_method = oauth.OAuthSignatureMethod_HMAC_SHA1 | ||
1585 | 65 | if ":" in self.host: | ||
1586 | 66 | trash, port = self.host.split(":", 1) | ||
1587 | 67 | if port == "443": | ||
1588 | 68 | scheme = "https" | ||
1589 | 69 | sig_method = oauth.OAuthSignatureMethod_PLAINTEXT | ||
1590 | 70 | full_http_url = "%s://%s%s" % (scheme, self.host, request_uri) | ||
1591 | 71 | schema, netloc, path, params, query, fragment = \ | ||
1592 | 72 | urlparse.urlparse(full_http_url) | ||
1593 | 73 | querystr_as_dict = dict(cgi.parse_qsl(query)) | ||
1594 | 74 | req = oauth.OAuthRequest.from_consumer_and_token( | ||
1595 | 75 | consumer, | ||
1596 | 76 | access_token, | ||
1597 | 77 | http_method = method, | ||
1598 | 78 | http_url = full_http_url, | ||
1599 | 79 | parameters = querystr_as_dict | ||
1600 | 80 | ) | ||
1601 | 81 | req.sign_request(sig_method(), consumer, access_token) | ||
1602 | 82 | headers.update(httplib2._normalize_headers(req.to_header())) | ||
1603 | 83 | |||
1604 | 84 | class OAuthCapableHttp(httplib2.Http): | ||
1605 | 85 | """Subclass of httplib2.Http which specifically uses our OAuth | ||
1606 | 86 | Authentication subclass (because httplib2 doesn't know about it)""" | ||
1607 | 87 | def add_oauth_tokens(self, consumer_key, consumer_secret, | ||
1608 | 88 | token, token_secret): | ||
1609 | 89 | self.oauth_data = { | ||
1610 | 90 | "consumer_key": consumer_key, | ||
1611 | 91 | "consumer_secret": consumer_secret, | ||
1612 | 92 | "token": token, | ||
1613 | 93 | "token_secret": token_secret | ||
1614 | 94 | } | ||
1615 | 95 | |||
1616 | 96 | def _auth_from_challenge(self, host, request_uri, headers, response, | ||
1617 | 97 | content): | ||
1618 | 98 | """Since we know we're talking to desktopcouch, and we know that it | ||
1619 | 99 | requires OAuth, just return the OAuthAuthentication here rather | ||
1620 | 100 | than checking to see which supported auth method is required.""" | ||
1621 | 101 | yield OAuthAuthentication(self.oauth_data, host, request_uri, headers, | ||
1622 | 102 | response, content, self) | ||
1623 | 103 | |||
1624 | 104 | def row_is_deleted(row): | ||
1625 | 105 | """Test if a row is marked as deleted. Well-behaved view map functions | ||
1626 | 106 | should not return rows marked as deleted, so this function is not often | ||
1627 | 107 | required.""" | ||
1628 | 108 | try: | ||
1629 | 109 | return row['application_annotations']['Ubuntu One']\ | ||
1630 | 110 | ['private_application_annotations']['deleted'] | ||
1631 | 111 | except KeyError: | ||
1632 | 112 | return False | ||
1633 | 113 | |||
1634 | 114 | |||
1635 | 115 | class CouchDatabaseBase(object): | ||
1636 | 116 | """A small, records-specific abstraction over a CouchDB database.""" | ||
1637 | 117 | |||
1638 | 118 | def __init__(self, database, uri, record_factory=None, create=False, | ||
1639 | 119 | server_class=Server, **server_class_extras): | ||
1640 | 120 | self.server_uri = uri | ||
1641 | 121 | self._server = server_class(self.server_uri, **server_class_extras) | ||
1642 | 122 | if database not in self._server: | ||
1643 | 123 | if create: | ||
1644 | 124 | self._server.create(database) | ||
1645 | 125 | else: | ||
1646 | 126 | raise NoSuchDatabase(database) | ||
1647 | 127 | self.db = self._server[database] | ||
1648 | 128 | self.record_factory = record_factory or Record | ||
1649 | 129 | |||
1650 | 130 | def _temporary_query(self, map_fun, reduce_fun=None, language='javascript', | ||
1651 | 131 | wrapper=None, **options): | ||
1652 | 132 | """Pass-through to CouchDB library. Deprecated.""" | ||
1653 | 133 | return self.db.query(map_fun, reduce_fun, language, | ||
1654 | 134 | wrapper, **options) | ||
1655 | 135 | |||
1656 | 136 | def get_record(self, record_id): | ||
1657 | 137 | """Get a record from back end storage.""" | ||
1658 | 138 | try: | ||
1659 | 139 | couch_record = self.db[record_id] | ||
1660 | 140 | except ResourceNotFound: | ||
1661 | 141 | return None | ||
1662 | 142 | data = {} | ||
1663 | 143 | if 'deleted' in couch_record.get('application_annotations', {}).get( | ||
1664 | 144 | 'Ubuntu One', {}).get('private_application_annotations', {}): | ||
1665 | 145 | return None | ||
1666 | 146 | data.update(couch_record) | ||
1667 | 147 | record = self.record_factory(data=data) | ||
1668 | 148 | record.record_id = record_id | ||
1669 | 149 | return record | ||
1670 | 150 | |||
1671 | 151 | def put_record(self, record): | ||
1672 | 152 | """Put a record in back end storage.""" | ||
1673 | 153 | record_id = record.record_id or record._data.get('_id', '') | ||
1674 | 154 | record_data = record._data | ||
1675 | 155 | if record_id: | ||
1676 | 156 | self.db[record_id] = record_data | ||
1677 | 157 | else: | ||
1678 | 158 | record_id = self._add_record(record_data) | ||
1679 | 159 | return record_id | ||
1680 | 160 | |||
1681 | 161 | def update_fields(self, record_id, fields): | ||
1682 | 162 | """Safely update a number of fields. 'fields' being a | ||
1683 | 163 | dictionary with fieldname: value for only the fields we want | ||
1684 | 164 | to change the value of. | ||
1685 | 165 | """ | ||
1686 | 166 | while True: | ||
1687 | 167 | record = self.db[record_id] | ||
1688 | 168 | record.update(fields) | ||
1689 | 169 | try: | ||
1690 | 170 | self.db[record_id] = record | ||
1691 | 171 | except ResourceConflict: | ||
1692 | 172 | continue | ||
1693 | 173 | break | ||
1694 | 174 | |||
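The while-loop in update_fields() above is a classic optimistic-concurrency retry: read the document, merge the changed fields, attempt the write, and start over if another writer got there first. A toy sketch of the pattern (the ToyStore class and its _fail_next_write hook are invented for illustration; the real code talks to CouchDB, which raises its own ResourceConflict):

```python
class ResourceConflict(Exception):
    """Stand-in for couchdb.client.ResourceConflict."""


class ToyStore(object):
    """Tiny in-memory document store standing in for CouchDB."""

    def __init__(self):
        self._docs = {}
        self._fail_next_write = False   # test hook: simulate one conflict

    def read(self, doc_id):
        return dict(self._docs.get(doc_id, {}))

    def write(self, doc_id, doc):
        if self._fail_next_write:
            self._fail_next_write = False
            raise ResourceConflict()
        self._docs[doc_id] = doc


def update_fields(store, doc_id, fields):
    """Retry the read-modify-write cycle until it lands conflict-free."""
    while True:
        doc = store.read(doc_id)
        doc.update(fields)
        try:
            store.write(doc_id, doc)
        except ResourceConflict:
            continue            # someone else wrote first: re-read, retry
        break


store = ToyStore()
store.write('rec', {'a': 1, 'b': 2})
store._fail_next_write = True           # force one conflict on first attempt
update_fields(store, 'rec', {'b': 3})
assert store.read('rec') == {'a': 1, 'b': 3}
```

Because only the named fields are merged on each retry, concurrent updates to other fields survive, which is exactly what the docstring's "safely update a number of fields" promises.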
1695 | 175 | def _add_record(self, data): | ||
1696 | 176 | """Add a new record to the storage backend.""" | ||
1697 | 177 | return self.db.create(data) | ||
1698 | 178 | |||
1699 | 179 | def delete_record(self, record_id): | ||
1700 | 180 | """Delete record with given id""" | ||
1701 | 181 | record = self.db[record_id] | ||
1702 | 182 | record.setdefault('application_annotations', {}).setdefault( | ||
1703 | 183 | 'Ubuntu One', {}).setdefault('private_application_annotations', {})[ | ||
1704 | 184 | 'deleted'] = True | ||
1705 | 185 | self.db[record_id] = record | ||
1706 | 186 | |||
1707 | 187 | def record_exists(self, record_id): | ||
1708 | 188 | """Check if record with given id exists.""" | ||
1709 | 189 | if record_id not in self.db: | ||
1710 | 190 | return False | ||
1711 | 191 | record = self.db[record_id] | ||
1712 | 192 | return 'deleted' not in record.get('application_annotations', {}).get( | ||
1713 | 193 | 'Ubuntu One', {}).get('private_application_annotations', {}) | ||
1714 | 194 | |||
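`delete_record` and `record_exists` together implement soft deletion: the document stays in the database but carries a nested `deleted` flag under its Ubuntu One private application annotations, and readers treat flagged documents as absent. The chained-`setdefault` shape can be exercised on plain dicts (the helper names below are illustrative only):

```python
def mark_deleted(record):
    """Flag a record dict as deleted, creating the nested
    application_annotations structure on the way, in the same
    chained-setdefault shape as delete_record."""
    record.setdefault('application_annotations', {}).setdefault(
        'Ubuntu One', {}).setdefault(
        'private_application_annotations', {})['deleted'] = True
    return record

def is_live(record):
    """True unless the record carries the nested 'deleted' flag,
    mirroring the record_exists check."""
    return 'deleted' not in record.get('application_annotations', {}).get(
        'Ubuntu One', {}).get('private_application_annotations', {})

doc = {'record_type': 'http://example.com/'}
assert is_live(doc)
mark_deleted(doc)
assert not is_live(doc)
```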
1715 | 195 | def delete_view(self, view_name, design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
1716 | 196 | """Remove a view, given its name. Raises a KeyError on a unknown | ||
1717 | 197 | view. Returns a dict of functions the deleted view defined.""" | ||
1718 | 198 | if design_doc is None: | ||
1719 | 199 | design_doc = view_name | ||
1720 | 200 | |||
1721 | 201 | doc_id = "_design/%(design_doc)s" % locals() | ||
1722 | 202 | |||
1723 | 203 | # No atomic updates. Only read & mutate & write. Le sigh. | ||
1724 | 204 | # First, get current contents. | ||
1725 | 205 | try: | ||
1726 | 206 | view_container = self.db[doc_id]["views"] | ||
1727 | 207 | except (KeyError, ResourceNotFound): | ||
1728 | 208 | raise KeyError | ||
1729 | 209 | |||
1730 | 210 | deleted_data = view_container.pop(view_name) # Remove target | ||
1731 | 211 | |||
1732 | 212 | if len(view_container) > 0: | ||
1733 | 213 | # Construct a new list of objects representing all views to have. | ||
1734 | 214 | views = [ | ||
1735 | 215 | ViewDefinition(design_doc, k, v.get("map"), v.get("reduce")) | ||
1736 | 216 | for k, v | ||
1737 | 217 | in view_container.iteritems() | ||
1738 | 218 | ] | ||
1739 | 219 | # Push back a new batch of views. Pray to Eris that this doesn't | ||
1740 | 220 | # clobber anything we want. | ||
1741 | 221 | |||
1742 | 222 | # sync_many does nothing if we pass an empty list. It even gets | ||
1743 | 223 | # its design-document from the ViewDefinition items, and if there | ||
1744 | 224 | # are no items, then it has no idea of a design document to | ||
1745 | 225 | # update. This is a serious flaw. Thus, the "else" to follow. | ||
1746 | 226 | ViewDefinition.sync_many(self.db, views, remove_missing=True) | ||
1747 | 227 | else: | ||
1748 | 228 | # There are no views left in this design document. | ||
1749 | 229 | |||
1750 | 230 | # Remove design document. This assumes there are only views in | ||
1751 | 231 | # design documents. :( | ||
1752 | 232 | del self.db[doc_id] | ||
1753 | 233 | |||
1754 | 234 | assert not self.view_exists(view_name, design_doc) | ||
1755 | 235 | |||
1756 | 236 | return deleted_data | ||
1757 | 237 | |||
1758 | 238 | def execute_view(self, view_name, design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
1759 | 239 | """Execute view and return results.""" | ||
1760 | 240 | if design_doc is None: | ||
1761 | 241 | design_doc = view_name | ||
1762 | 242 | |||
1763 | 243 | view_id_fmt = "_design/%(design_doc)s/_view/%(view_name)s" | ||
1764 | 244 | return self.db.view(view_id_fmt % locals()) | ||
1765 | 245 | |||
1766 | 246 | def add_view(self, view_name, map_js, reduce_js, | ||
1767 | 247 | design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
1768 | 248 | """Create a view, given a name and the two parts (map and reduce). | ||
1769 | 249 | Return the document id.""" | ||
1770 | 250 | if design_doc is None: | ||
1771 | 251 | design_doc = view_name | ||
1772 | 252 | |||
1773 | 253 | view = ViewDefinition(design_doc, view_name, map_js, reduce_js) | ||
1774 | 254 | view.sync(self.db) | ||
1775 | 255 | assert self.view_exists(view_name, design_doc) | ||
1776 | 256 | |||
1777 | 257 | def view_exists(self, view_name, design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
1778 | 258 | """Does a view with a given name, in an optional design document | ||
1779 | 259 | exist?""" | ||
1780 | 260 | if design_doc is None: | ||
1781 | 261 | design_doc = view_name | ||
1782 | 262 | |||
1783 | 263 | doc_id = "_design/%(design_doc)s" % locals() | ||
1784 | 264 | |||
1785 | 265 | try: | ||
1786 | 266 | view_container = self.db[doc_id]["views"] | ||
1787 | 267 | return view_name in view_container | ||
1788 | 268 | except (KeyError, ResourceNotFound): | ||
1789 | 269 | return False | ||
1790 | 270 | |||
1791 | 271 | def list_views(self, design_doc): | ||
1792 | 272 | """Return a list of view names for a given design document. There is | ||
1793 | 273 | no error if the design document does not exist or if there are no views | ||
1794 | 274 | in it.""" | ||
1795 | 275 | doc_id = "_design/%(design_doc)s" % locals() | ||
1796 | 276 | try: | ||
1797 | 277 | return list(self.db[doc_id]["views"]) | ||
1798 | 278 | except (KeyError, ResourceNotFound): | ||
1799 | 279 | return [] | ||
1800 | 280 | |||
1801 | 281 | def get_records(self, record_type=None, create_view=False, | ||
1802 | 282 | design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
1803 | 283 | """A convenience function to get records from a view named | ||
1804 | 284 | C{get_records_and_type}. We optionally create a view in the design | ||
1805 | 285 | document. C{create_view} may be True or False, and a special value, | ||
1806 | 286 | None, is analogous to O_EXCL|O_CREAT. | ||
1807 | 287 | |||
1808 | 288 | Set record_type to a string to retrieve records of only that | ||
1809 | 289 | specified type. Otherwise, use the view to return *all* records. | ||
1810 | 290 | If there is no view to use, or we insist on creating a new view | ||
1811 | 291 | and cannot, raise KeyError. | ||
1812 | 292 | |||
1813 | 293 | You can use index notation on the result to get rows with a | ||
1814 | 294 | particular record type. | ||
1815 | 295 | =>> results = get_records() | ||
1816 | 296 | =>> for foo_document in results["foo"]: | ||
1817 | 297 | ... print foo_document | ||
1818 | 298 | |||
1819 | 299 | Use slice notation to apply start-key and end-key options to the view. | ||
1820 | 300 | =>> results = get_records() | ||
1821 | 301 | =>> people = results[['Person']:['Person','ZZZZ']] | ||
1822 | 302 | """ | ||
1823 | 303 | view_name = "get_records_and_type" | ||
1824 | 304 | view_map_js = """ | ||
1825 | 305 | function(doc) { | ||
1826 | 306 | try { | ||
1827 | 307 | if (! doc['application_annotations']['Ubuntu One'] | ||
1828 | 308 | ['private_application_annotations']['deleted']) { | ||
1829 | 309 | emit(doc.record_type, doc); | ||
1830 | 310 | } | ||
1831 | 311 | } catch (e) { | ||
1832 | 312 | emit(doc.record_type, doc); | ||
1833 | 313 | } | ||
1834 | 314 | }""" | ||
1835 | 315 | |||
1836 | 316 | if design_doc is None: | ||
1837 | 317 | design_doc = view_name | ||
1838 | 318 | |||
1839 | 319 | exists = self.view_exists(view_name, design_doc) | ||
1840 | 320 | |||
1841 | 321 | if exists: | ||
1842 | 322 | if create_view is None: | ||
1843 | 323 | raise KeyError("Exclusive creation failed.") | ||
1844 | 324 | else: | ||
1845 | 325 | if create_view == False: | ||
1846 | 326 | raise KeyError("View doesn't already exist.") | ||
1847 | 327 | |||
1848 | 328 | if not exists: | ||
1849 | 329 | self.add_view(view_name, view_map_js, None, design_doc) | ||
1850 | 330 | |||
1851 | 331 | viewdata = self.execute_view(view_name, design_doc) | ||
1852 | 332 | if record_type is None: | ||
1853 | 333 | return viewdata | ||
1854 | 334 | else: | ||
1855 | 335 | return viewdata[record_type] | ||
1856 | 0 | 336 | ||
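The JavaScript map function in `get_records` emits `(record_type, doc)` pairs while skipping documents whose nested `deleted` flag is set; its `try/catch` means documents that lack the annotation path entirely are still emitted. A Python rendering of the same filter, useful for checking the logic outside CouchDB (illustrative only, not part of the package):

```python
def emit_rows(docs):
    """Mimic the get_records_and_type map function over a list of
    dicts: emit (record_type, doc) unless the nested 'deleted' flag
    is set; documents missing the annotation path fall through the
    KeyError and are emitted, mirroring the JavaScript try/catch."""
    rows = []
    for doc in docs:
        try:
            deleted = doc['application_annotations']['Ubuntu One'][
                'private_application_annotations']['deleted']
        except KeyError:
            deleted = False
        if not deleted:
            rows.append((doc.get('record_type'), doc))
    return rows

docs = [
    {'record_type': 'Person', 'name': 'alive'},
    {'record_type': 'Person', 'name': 'gone',
     'application_annotations': {'Ubuntu One': {
         'private_application_annotations': {'deleted': True}}}},
]
rows = emit_rows(docs)  # only the first document survives the filter
```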
1857 | === removed file 'desktopcouch/records/server_base.py' | |||
1858 | --- desktopcouch/records/server_base.py 2009-09-28 12:06:08 +0000 | |||
1859 | +++ desktopcouch/records/server_base.py 1970-01-01 00:00:00 +0000 | |||
1860 | @@ -1,326 +0,0 @@ | |||
1861 | 1 | # Copyright 2009 Canonical Ltd. | ||
1862 | 2 | # | ||
1863 | 3 | # This file is part of desktopcouch. | ||
1864 | 4 | # | ||
1865 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
1866 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
1867 | 7 | # as published by the Free Software Foundation. | ||
1868 | 8 | # | ||
1869 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
1870 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
1871 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
1872 | 12 | # GNU Lesser General Public License for more details. | ||
1873 | 13 | # | ||
1874 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
1875 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
1876 | 16 | # | ||
1877 | 17 | # Authors: Eric Casteleijn <eric.casteleijn@canonical.com> | ||
1878 | 18 | # Mark G. Saye <mark.saye@canonical.com> | ||
1879 | 19 | # Stuart Langridge <stuart.langridge@canonical.com> | ||
1880 | 20 | # Chad Miller <chad.miller@canonical.com> | ||
1881 | 21 | |||
1882 | 22 | """The Desktop Couch Records API.""" | ||
1883 | 23 | |||
1884 | 24 | from couchdb import Server | ||
1885 | 25 | from couchdb.client import ResourceNotFound, ResourceConflict, Resource | ||
1886 | 26 | from couchdb.design import ViewDefinition | ||
1887 | 27 | from record import Record | ||
1888 | 28 | import httplib2 | ||
1889 | 29 | from oauth import oauth | ||
1890 | 30 | import urlparse | ||
1891 | 31 | import cgi | ||
1892 | 32 | import logging | ||
1893 | 33 | |||
1894 | 34 | #DEFAULT_DESIGN_DOCUMENT = "design" | ||
1895 | 35 | DEFAULT_DESIGN_DOCUMENT = None # each view in its own eponymous design doc. | ||
1896 | 36 | |||
1897 | 37 | |||
1898 | 38 | class NoSuchDatabase(Exception): | ||
1899 | 39 | "Exception for trying to use a non-existent database" | ||
1900 | 40 | |||
1901 | 41 | def __init__(self, dbname): | ||
1902 | 42 | self.database = dbname | ||
1903 | 43 | super(NoSuchDatabase, self).__init__() | ||
1904 | 44 | |||
1905 | 45 | def __str__(self): | ||
1906 | 46 | return ("Database %s does not exist on this server. (Create it by " | ||
1907 | 47 | "passing create=True)") % self.database | ||
1908 | 48 | |||
1909 | 49 | class OAuthAuthentication(httplib2.Authentication): | ||
1910 | 50 | """An httplib2.Authentication subclass for OAuth""" | ||
1911 | 51 | def __init__(self, oauth_data, host, request_uri, headers, response, | ||
1912 | 52 | content, http): | ||
1913 | 53 | self.oauth_data = oauth_data | ||
1914 | 54 | httplib2.Authentication.__init__(self, None, host, request_uri, | ||
1915 | 55 | headers, response, content, http) | ||
1916 | 56 | |||
1917 | 57 | def request(self, method, request_uri, headers, content): | ||
1918 | 58 | """Modify the request headers to add the appropriate | ||
1919 | 59 | Authorization header.""" | ||
1920 | 60 | consumer = oauth.OAuthConsumer(self.oauth_data['consumer_key'], | ||
1921 | 61 | self.oauth_data['consumer_secret']) | ||
1922 | 62 | access_token = oauth.OAuthToken(self.oauth_data['token'], | ||
1923 | 63 | self.oauth_data['token_secret']) | ||
1924 | 64 | full_http_url = "http://%s%s" % (self.host, request_uri) | ||
1925 | 65 | schema, netloc, path, params, query, fragment = urlparse.urlparse(full_http_url) | ||
1926 | 66 | querystr_as_dict = dict(cgi.parse_qsl(query)) | ||
1927 | 67 | req = oauth.OAuthRequest.from_consumer_and_token( | ||
1928 | 68 | consumer, | ||
1929 | 69 | access_token, | ||
1930 | 70 | http_method = method, | ||
1931 | 71 | http_url = full_http_url, | ||
1932 | 72 | parameters = querystr_as_dict | ||
1933 | 73 | ) | ||
1934 | 74 | req.sign_request(oauth.OAuthSignatureMethod_HMAC_SHA1(), consumer, access_token) | ||
1935 | 75 | headers.update(httplib2._normalize_headers(req.to_header())) | ||
1936 | 76 | for header in headers.iteritems(): | ||
1937 | 77 | logging.debug("header %s", header) | ||
1938 | 78 | |||
1939 | 79 | class OAuthCapableHttp(httplib2.Http): | ||
1940 | 80 | """Subclass of httplib2.Http which specifically uses our OAuth | ||
1941 | 81 | Authentication subclass (because httplib2 doesn't know about it)""" | ||
1942 | 82 | def add_oauth_tokens(self, consumer_key, consumer_secret, | ||
1943 | 83 | token, token_secret): | ||
1944 | 84 | self.oauth_data = { | ||
1945 | 85 | "consumer_key": consumer_key, | ||
1946 | 86 | "consumer_secret": consumer_secret, | ||
1947 | 87 | "token": token, | ||
1948 | 88 | "token_secret": token_secret | ||
1949 | 89 | } | ||
1950 | 90 | |||
1951 | 91 | def _auth_from_challenge(self, host, request_uri, headers, response, content): | ||
1952 | 92 | """Since we know we're talking to desktopcouch, and we know that it | ||
1953 | 93 | requires OAuth, just return the OAuthAuthentication here rather | ||
1954 | 94 | than checking to see which supported auth method is required.""" | ||
1955 | 95 | yield OAuthAuthentication(self.oauth_data, host, request_uri, headers, | ||
1956 | 96 | response, content, self) | ||
1957 | 97 | |||
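Before signing, `OAuthAuthentication.request` above splits the full request URL and folds its query string into a parameter dict, so that query parameters are included in the OAuth signature base string. The parsing step can be sketched with the standard library alone (Python 3 spelling of the `urlparse`/`cgi.parse_qsl` calls used above):

```python
from urllib.parse import urlparse, parse_qsl

def query_params(full_http_url):
    """Split a URL and fold its query string into a parameter dict,
    as the OAuth request builder does before signing. Duplicate keys
    collapse to the last value, matching dict(parse_qsl(...))."""
    parts = urlparse(full_http_url)
    return dict(parse_qsl(parts.query))

params = query_params(
    "http://localhost:5984/mydb/_design/d/_view/v?limit=10&descending=true")
```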
1958 | 98 | def row_is_deleted(row): | ||
1959 | 99 | """Test if a row is marked as deleted. Smart views 'maps' should not | ||
1960 | 100 | return rows that are marked as deleted, so this function is not often | ||
1961 | 101 | required.""" | ||
1962 | 102 | try: | ||
1963 | 103 | return row['application_annotations']['Ubuntu One']\ | ||
1964 | 104 | ['private_application_annotations']['deleted'] | ||
1965 | 105 | except KeyError: | ||
1966 | 106 | return False | ||
1967 | 107 | |||
1968 | 108 | |||
1969 | 109 | class CouchDatabaseBase(object): | ||
1970 | 110 | """An small records specific abstraction over a couch db database.""" | ||
1971 | 111 | |||
1972 | 112 | def __init__(self, database, uri, record_factory=None, create=False, | ||
1973 | 113 | server_class=Server, **server_class_extras): | ||
1974 | 114 | self.server_uri = uri | ||
1975 | 115 | self._server = server_class(self.server_uri, **server_class_extras) | ||
1976 | 116 | if database not in self._server: | ||
1977 | 117 | if create: | ||
1978 | 118 | self._server.create(database) | ||
1979 | 119 | else: | ||
1980 | 120 | raise NoSuchDatabase(database) | ||
1981 | 121 | self.db = self._server[database] | ||
1982 | 122 | self.record_factory = record_factory or Record | ||
1983 | 123 | |||
1984 | 124 | def _temporary_query(self, map_fun, reduce_fun=None, language='javascript', | ||
1985 | 125 | wrapper=None, **options): | ||
1986 | 126 | """Pass-through to CouchDB library. Deprecated.""" | ||
1987 | 127 | return self.db.query(map_fun, reduce_fun, language, | ||
1988 | 128 | wrapper, **options) | ||
1989 | 129 | |||
1990 | 130 | def get_record(self, record_id): | ||
1991 | 131 | """Get a record from back end storage.""" | ||
1992 | 132 | try: | ||
1993 | 133 | couch_record = self.db[record_id] | ||
1994 | 134 | except ResourceNotFound: | ||
1995 | 135 | return None | ||
1996 | 136 | data = {} | ||
1997 | 137 | data.update(couch_record) | ||
1998 | 138 | record = self.record_factory(data=data) | ||
1999 | 139 | record.record_id = record_id | ||
2000 | 140 | return record | ||
2001 | 141 | |||
2002 | 142 | def put_record(self, record): | ||
2003 | 143 | """Put a record in back end storage.""" | ||
2004 | 144 | record_id = record.record_id or record._data.get('_id', '') | ||
2005 | 145 | record_data = record._data | ||
2006 | 146 | if record_id: | ||
2007 | 147 | self.db[record_id] = record_data | ||
2008 | 148 | else: | ||
2009 | 149 | record_id = self._add_record(record_data) | ||
2010 | 150 | return record_id | ||
2011 | 151 | |||
2012 | 152 | def update_fields(self, doc_id, fields): | ||
2013 | 153 | """Safely update a number of fields. 'fields' being a | ||
2014 | 154 | dictionary with fieldname: value for only the fields we want | ||
2015 | 155 | to change the value of. | ||
2016 | 156 | """ | ||
2017 | 157 | while True: | ||
2018 | 158 | doc = self.db[doc_id] | ||
2019 | 159 | doc.update(fields) | ||
2020 | 160 | try: | ||
2021 | 161 | self.db[doc.id] = doc | ||
2022 | 162 | except ResourceConflict: | ||
2023 | 163 | continue | ||
2024 | 164 | break | ||
2025 | 165 | |||
2026 | 166 | def _add_record(self, data): | ||
2027 | 167 | """Add a new record to the storage backend.""" | ||
2028 | 168 | return self.db.create(data) | ||
2029 | 169 | |||
2030 | 170 | def delete_record(self, record_id): | ||
2031 | 171 | """Delete record with given id""" | ||
2032 | 172 | record = self.db[record_id] | ||
2033 | 173 | record.setdefault('application_annotations', {}).setdefault( | ||
2034 | 174 | 'Ubuntu One', {}).setdefault('private_application_annotations', {})[ | ||
2035 | 175 | 'deleted'] = True | ||
2036 | 176 | self.db[record_id] = record | ||
2037 | 177 | |||
2038 | 178 | def record_exists(self, record_id): | ||
2039 | 179 | """Check if record with given id exists.""" | ||
2040 | 180 | if record_id not in self.db: | ||
2041 | 181 | return False | ||
2042 | 182 | record = self.db[record_id] | ||
2043 | 183 | return 'deleted' not in record.get('application_annotations', {}).get( | ||
2044 | 184 | 'Ubuntu One', {}).get('private_application_annotations', {}) | ||
2045 | 185 | |||
2046 | 186 | def delete_view(self, view_name, design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
2047 | 187 | """Remove a view, given its name. Raises a KeyError on a unknown | ||
2048 | 188 | view. Returns a dict of functions the deleted view defined.""" | ||
2049 | 189 | if design_doc is None: | ||
2050 | 190 | design_doc = view_name | ||
2051 | 191 | |||
2052 | 192 | doc_id = "_design/%(design_doc)s" % locals() | ||
2053 | 193 | |||
2054 | 194 | # No atomic updates. Only read & mutate & write. Le sigh. | ||
2055 | 195 | # First, get current contents. | ||
2056 | 196 | try: | ||
2057 | 197 | view_container = self.db[doc_id]["views"] | ||
2058 | 198 | except (KeyError, ResourceNotFound): | ||
2059 | 199 | raise KeyError | ||
2060 | 200 | |||
2061 | 201 | deleted_data = view_container.pop(view_name) # Remove target | ||
2062 | 202 | |||
2063 | 203 | if len(view_container) > 0: | ||
2064 | 204 | # Construct a new list of objects representing all views to have. | ||
2065 | 205 | views = [ | ||
2066 | 206 | ViewDefinition(design_doc, k, v.get("map"), v.get("reduce")) | ||
2067 | 207 | for k, v | ||
2068 | 208 | in view_container.iteritems() | ||
2069 | 209 | ] | ||
2070 | 210 | # Push back a new batch of views. Pray to Eris that this doesn't | ||
2071 | 211 | # clobber anything we want. | ||
2072 | 212 | |||
2073 | 213 | # sync_many does nothing if we pass an empty list. It even gets | ||
2074 | 214 | # its design-document from the ViewDefinition items, and if there | ||
2075 | 215 | # are no items, then it has no idea of a design document to | ||
2076 | 216 | # update. This is a serious flaw. Thus, the "else" to follow. | ||
2077 | 217 | ViewDefinition.sync_many(self.db, views, remove_missing=True) | ||
2078 | 218 | else: | ||
2079 | 219 | # There are no views left in this design document. | ||
2080 | 220 | |||
2081 | 221 | # Remove design document. This assumes there are only views in | ||
2082 | 222 | # design documents. :( | ||
2083 | 223 | del self.db[doc_id] | ||
2084 | 224 | |||
2085 | 225 | assert not self.view_exists(view_name, design_doc) | ||
2086 | 226 | |||
2087 | 227 | return deleted_data | ||
2088 | 228 | |||
2089 | 229 | def execute_view(self, view_name, design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
2090 | 230 | """Execute view and return results.""" | ||
2091 | 231 | if design_doc is None: | ||
2092 | 232 | design_doc = view_name | ||
2093 | 233 | |||
2094 | 234 | view_id_fmt = "_design/%(design_doc)s/_view/%(view_name)s" | ||
2095 | 235 | return self.db.view(view_id_fmt % locals()) | ||
2096 | 236 | |||
2097 | 237 | def add_view(self, view_name, map_js, reduce_js, | ||
2098 | 238 | design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
2099 | 239 | """Create a view, given a name and the two parts (map and reduce). | ||
2100 | 240 | Return the document id.""" | ||
2101 | 241 | if design_doc is None: | ||
2102 | 242 | design_doc = view_name | ||
2103 | 243 | |||
2104 | 244 | view = ViewDefinition(design_doc, view_name, map_js, reduce_js) | ||
2105 | 245 | view.sync(self.db) | ||
2106 | 246 | assert self.view_exists(view_name, design_doc) | ||
2107 | 247 | |||
2108 | 248 | def view_exists(self, view_name, design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
2109 | 249 | """Does a view with a given name, in an optional design document | ||
2110 | 250 | exist?""" | ||
2111 | 251 | if design_doc is None: | ||
2112 | 252 | design_doc = view_name | ||
2113 | 253 | |||
2114 | 254 | doc_id = "_design/%(design_doc)s" % locals() | ||
2115 | 255 | |||
2116 | 256 | try: | ||
2117 | 257 | view_container = self.db[doc_id]["views"] | ||
2118 | 258 | return view_name in view_container | ||
2119 | 259 | except (KeyError, ResourceNotFound): | ||
2120 | 260 | return False | ||
2121 | 261 | |||
2122 | 262 | def list_views(self, design_doc): | ||
2123 | 263 | """Return a list of view names for a given design document. There is | ||
2124 | 264 | no error if the design document does not exist or if there are no views | ||
2125 | 265 | in it.""" | ||
2126 | 266 | doc_id = "_design/%(design_doc)s" % locals() | ||
2127 | 267 | try: | ||
2128 | 268 | return list(self.db[doc_id]["views"]) | ||
2129 | 269 | except (KeyError, ResourceNotFound): | ||
2130 | 270 | return [] | ||
2131 | 271 | |||
2132 | 272 | def get_records(self, record_type=None, create_view=False, | ||
2133 | 273 | design_doc=DEFAULT_DESIGN_DOCUMENT): | ||
2134 | 274 | """A convenience function to get records from a view named | ||
2135 | 275 | C{get_records_and_type}. We optionally create a view in the design | ||
2136 | 276 | document. C{create_view} may be True or False, and a special value, | ||
2137 | 277 | None, is analogous to O_EXCL|O_CREAT. | ||
2138 | 278 | |||
2139 | 279 | Set record_type to a string to retrieve records of only that | ||
2140 | 280 | specified type. Otherwise, use the view to return *all* records. | ||
2141 | 281 | If there is no view to use, or we insist on creating a new view | ||
2142 | 282 | and cannot, raise KeyError. | ||
2143 | 283 | |||
2144 | 284 | You can use index notation on the result to get rows with a | ||
2145 | 285 | particular record type. | ||
2146 | 286 | =>> results = get_records() | ||
2147 | 287 | =>> for foo_document in results["foo"]: | ||
2148 | 288 | ... print foo_document | ||
2149 | 289 | |||
2150 | 290 | Use slice notation to apply start-key and end-key options to the view. | ||
2151 | 291 | =>> results = get_records() | ||
2152 | 292 | =>> people = results[['Person']:['Person','ZZZZ']] | ||
2153 | 293 | """ | ||
2154 | 294 | view_name = "get_records_and_type" | ||
2155 | 295 | view_map_js = """ | ||
2156 | 296 | function(doc) { | ||
2157 | 297 | try { | ||
2158 | 298 | if (! doc['application_annotations']['Ubuntu One'] | ||
2159 | 299 | ['private_application_annotations']['deleted']) { | ||
2160 | 300 | emit(doc.record_type, doc); | ||
2161 | 301 | } | ||
2162 | 302 | } catch (e) { | ||
2163 | 303 | emit(doc.record_type, doc); | ||
2164 | 304 | } | ||
2165 | 305 | }""" | ||
2166 | 306 | |||
2167 | 307 | if design_doc is None: | ||
2168 | 308 | design_doc = view_name | ||
2169 | 309 | |||
2170 | 310 | exists = self.view_exists(view_name, design_doc) | ||
2171 | 311 | |||
2172 | 312 | if exists: | ||
2173 | 313 | if create_view is None: | ||
2174 | 314 | raise KeyError("Exclusive creation failed.") | ||
2175 | 315 | else: | ||
2176 | 316 | if create_view == False: | ||
2177 | 317 | raise KeyError("View doesn't already exist.") | ||
2178 | 318 | |||
2179 | 319 | if not exists: | ||
2180 | 320 | self.add_view(view_name, view_map_js, None, design_doc) | ||
2181 | 321 | |||
2182 | 322 | viewdata = self.execute_view(view_name, design_doc) | ||
2183 | 323 | if record_type is None: | ||
2184 | 324 | return viewdata | ||
2185 | 325 | else: | ||
2186 | 326 | return viewdata[record_type] | ||
2187 | 327 | 0 | ||
2188 | === modified file 'desktopcouch/records/tests/test_couchgrid.py' | |||
2189 | --- desktopcouch/records/tests/test_couchgrid.py 2009-09-23 14:22:38 +0000 | |||
2190 | +++ desktopcouch/records/tests/test_couchgrid.py 2009-10-12 14:29:10 +0000 | |||
2191 | @@ -128,6 +128,27 @@ | |||
2192 | 128 | self.assertEqual(cw.get_model().get_n_columns(),4) | 128 | self.assertEqual(cw.get_model().get_n_columns(),4) |
2193 | 129 | self.assertEqual(len(cw.get_model()),2) | 129 | self.assertEqual(len(cw.get_model()),2) |
2194 | 130 | 130 | ||
2195 | 131 | def test_selected_id_property(self): | ||
2196 | 132 | #create some records | ||
2197 | 133 | db = CouchDatabase(self.dbname, create=True) | ||
2198 | 134 | id1 = db.put_record(Record({ | ||
2199 | 135 | "key1_1": "val1_1", "key1_2": "val1_2", "key1_3": "val1_3", | ||
2200 | 136 | "record_type": self.record_type})) | ||
2201 | 137 | id2 = db.put_record(Record({ | ||
2202 | 138 | "key1_1": "val2_1", "key1_2": "val2_2", "key1_3": "val2_3", | ||
2203 | 139 | "record_type": self.record_type})) | ||
2204 | 140 | |||
2205 | 141 | #build the CouchGrid | ||
2206 | 142 | cw = CouchGrid(self.dbname) | ||
2207 | 143 | cw.record_type = self.record_type | ||
2208 | 144 | |||
2209 | 145 | #make sure the record ids are selected properly | ||
2210 | 146 | cw.selected_record_ids = [id1] | ||
2211 | 147 | self.assertEqual(cw.selected_record_ids[0], id1) | ||
2212 | 148 | cw.selected_record_ids = [id2] | ||
2213 | 149 | self.assertEqual(cw.selected_record_ids[0], id2) | ||
2214 | 150 | |||
2215 | 151 | |||
2216 | 131 | def test_single_col_from_database(self): | 152 | def test_single_col_from_database(self): |
2217 | 132 | #create some records | 153 | #create some records |
2218 | 133 | self.db.put_record(Record({ | 154 | self.db.put_record(Record({ |
2219 | 134 | 155 | ||
2220 | === modified file 'desktopcouch/records/tests/test_field_registry.py' | |||
2221 | --- desktopcouch/records/tests/test_field_registry.py 2009-08-27 15:32:11 +0000 | |||
2222 | +++ desktopcouch/records/tests/test_field_registry.py 2009-10-12 14:29:10 +0000 | |||
2223 | @@ -17,7 +17,7 @@ | |||
2224 | 17 | 17 | ||
2225 | 18 | """Test cases for field mapping""" | 18 | """Test cases for field mapping""" |
2226 | 19 | 19 | ||
2228 | 20 | import copy | 20 | import copy, doctest |
2229 | 21 | from testtools import TestCase | 21 | from testtools import TestCase |
2230 | 22 | from desktopcouch.records.field_registry import ( | 22 | from desktopcouch.records.field_registry import ( |
2231 | 23 | SimpleFieldMapping, MergeableListFieldMapping, Transformer) | 23 | SimpleFieldMapping, MergeableListFieldMapping, Transformer) |
2232 | @@ -111,3 +111,7 @@ | |||
2233 | 111 | self.transformer.to_app(record, data) | 111 | self.transformer.to_app(record, data) |
2234 | 112 | self.assertEqual( | 112 | self.assertEqual( |
2235 | 113 | {'simpleField': 23, 'strawberryField': 'the value'}, data) | 113 | {'simpleField': 23, 'strawberryField': 'the value'}, data) |
2236 | 114 | |||
2237 | 115 | def test_run_doctests(self): | ||
2238 | 116 | results = doctest.testfile('../doc/field_registry.txt') | ||
2239 | 117 | self.assertEqual(0, results.failed) | ||
2240 | 114 | 118 | ||
2241 | === modified file 'desktopcouch/records/tests/test_record.py' | |||
2242 | --- desktopcouch/records/tests/test_record.py 2009-08-27 15:32:11 +0000 | |||
2243 | +++ desktopcouch/records/tests/test_record.py 2009-10-12 14:29:10 +0000 | |||
2244 | @@ -19,6 +19,7 @@ | |||
2245 | 19 | """Tests for the RecordDict object on which the Contacts API is built.""" | 19 | """Tests for the RecordDict object on which the Contacts API is built.""" |
2246 | 20 | 20 | ||
2247 | 21 | from testtools import TestCase | 21 | from testtools import TestCase |
2248 | 22 | import doctest | ||
2249 | 22 | 23 | ||
2250 | 23 | # pylint does not like relative imports from containing packages | 24 | # pylint does not like relative imports from containing packages |
2251 | 24 | # pylint: disable-msg=F0401 | 25 | # pylint: disable-msg=F0401 |
2252 | @@ -179,6 +180,10 @@ | |||
2253 | 179 | self.assertEqual('http://fnord.org/smorgasbord', | 180 | self.assertEqual('http://fnord.org/smorgasbord', |
2254 | 180 | self.record.record_type) | 181 | self.record.record_type) |
2255 | 181 | 182 | ||
2256 | 183 | def test_run_doctests(self): | ||
2257 | 184 | results = doctest.testfile('../doc/records.txt') | ||
2258 | 185 | self.assertEqual(0, results.failed) | ||
2259 | 186 | |||
2260 | 182 | 187 | ||
2261 | 183 | class TestRecordFactory(TestCase): | 188 | class TestRecordFactory(TestCase): |
2262 | 184 | """Test Record/Mergeable List factories.""" | 189 | """Test Record/Mergeable List factories.""" |
2263 | 185 | 190 | ||
2264 | === modified file 'desktopcouch/records/tests/test_server.py' (properties changed: +x to -x) | |||
2265 | --- desktopcouch/records/tests/test_server.py 2009-09-23 14:22:38 +0000 | |||
2266 | +++ desktopcouch/records/tests/test_server.py 2009-10-12 14:29:10 +0000 | |||
2267 | @@ -89,6 +89,14 @@ | |||
2268 | 89 | self.assert_(deleted_record['application_annotations']['Ubuntu One'][ | 89 | self.assert_(deleted_record['application_annotations']['Ubuntu One'][ |
2269 | 90 | 'private_application_annotations']['deleted']) | 90 | 'private_application_annotations']['deleted']) |
2270 | 91 | 91 | ||
2271 | 92 | def test_get_deleted_record(self): | ||
2272 | 93 | """Test (not) getting a deleted record.""" | ||
2273 | 94 | record = Record({'record_number': 0}, record_type="http://example.com/") | ||
2274 | 95 | record_id = self.database.put_record(record) | ||
2275 | 96 | self.database.delete_record(record_id) | ||
2276 | 97 | retrieved_record = self.database.get_record(record_id) | ||
2277 | 98 | self.assertEqual(None, retrieved_record) | ||
2278 | 99 | |||
2279 | 92 | def test_record_exists(self): | 100 | def test_record_exists(self): |
2280 | 93 | """Test checking whether a record exists.""" | 101 | """Test checking whether a record exists.""" |
2281 | 94 | record = Record({'record_number': 0}, record_type="http://example.com/") | 102 | record = Record({'record_number': 0}, record_type="http://example.com/") |
2282 | 95 | 103 | ||
2283 | === added file 'desktopcouch/replication.py' | |||
2284 | --- desktopcouch/replication.py 1970-01-01 00:00:00 +0000 | |||
2285 | +++ desktopcouch/replication.py 2009-10-12 14:29:10 +0000 | |||
2286 | @@ -0,0 +1,248 @@ | |||
2287 | 1 | # Copyright 2009 Canonical Ltd. | ||
2288 | 2 | # | ||
2289 | 3 | # This file is part of desktopcouch. | ||
2290 | 4 | # | ||
2291 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
2292 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
2293 | 7 | # as published by the Free Software Foundation. | ||
2294 | 8 | # | ||
2295 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
2296 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
2297 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
2298 | 12 | # GNU Lesser General Public License for more details. | ||
2299 | 13 | # | ||
2300 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
2301 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
2302 | 16 | # | ||
2303 | 17 | # Authors: Chad Miller <chad.miller@canonical.com> | ||
2304 | 18 | |||
2305 | 19 | import logging | ||
2306 | 20 | log = logging.getLogger("replication") | ||
2307 | 21 | |||
2308 | 22 | import dbus.exceptions | ||
2309 | 23 | |||
2310 | 24 | from desktopcouch.pair.couchdb_pairing import couchdb_io | ||
2311 | 25 | from desktopcouch.pair.couchdb_pairing import dbus_io | ||
2312 | 26 | from desktopcouch import replication_services | ||
2313 | 27 | |||
2314 | 28 | try: | ||
2315 | 29 | import urlparse | ||
2316 | 30 | except ImportError: | ||
2317 | 31 | import urllib.parse as urlparse | ||
2318 | 32 | |||
2319 | 33 | from twisted.internet import task, reactor | ||
2320 | 34 | |||
2321 | 35 | |||
2322 | 36 | known_bad_service_names = set() | ||
2323 | 37 | already_replicating = False | ||
2324 | 38 | is_running = True | ||
2325 | 39 | |||
2326 | 40 | |||
2327 | 41 | def db_targetprefix_for_service(service_name): | ||
2328 | 42 | """Use the service name to look up what the prefix should be on the | ||
2329 | 43 | databases. This gives an egalitarian way for non-UbuntuOne servers to have | ||
2330 | 44 | their own remote-db-name scheme.""" | ||
2331 | 45 | try: | ||
2332 | 46 | container = "desktopcouch.replication_services" | ||
2333 | 47 | log.debug("Looking up prefix for service %r", service_name) | ||
2334 | 48 | mod = __import__(container, fromlist=[service_name]) | ||
2335 | 49 | return getattr(mod, service_name).db_name_prefix | ||
2336 | 50 | except ImportError, e: | ||
2337 | 51 | log.error("The service %r is unknown. It is not a " | ||
2338 | 52 | "module in the %s package." % (service_name, container)) | ||
2339 | 53 | return "" | ||
2340 | 54 | except Exception, e: | ||
2341 | 55 | log.exception("Not changing remote db name.") | ||
2342 | 56 | return "" | ||
2343 | 57 | |||
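The `__import__(container, fromlist=[service_name])` call in `db_targetprefix_for_service` and `oauth_info_for_service` is a small plugin-lookup pattern: importing the package with the service name in `fromlist` makes the submodule reachable as an attribute. A minimal standalone sketch of that lookup, using the stdlib `json` package purely as a stand-in for `desktopcouch.replication_services`:

```python
def service_attribute(container, service_name, attr, default=None):
    """Fetch an attribute from a named submodule of a plugin package."""
    try:
        # fromlist makes Python import the named submodule as well,
        # so it is reachable as an attribute of the package.
        mod = __import__(container, fromlist=[service_name])
        return getattr(getattr(mod, service_name), attr)
    except (ImportError, AttributeError):
        # Unknown service: fall back to a safe default, as the
        # functions above return "" or None.
        return default

# Stand-in demonstration with a real stdlib package.
decoder_error = service_attribute("json", "decoder", "JSONDecodeError")
missing = service_attribute("json", "no_such_service", "x", default="")
```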
2344 | 58 | def oauth_info_for_service(service_name): | ||
2345 | 59 | """Use the service name to look up what oauth information we should use | ||
2346 | 60 | when talking to that service.""" | ||
2347 | 61 | try: | ||
2348 | 62 | container = "desktopcouch.replication_services" | ||
2349 | 63 | log.debug("Looking up prefix for service %r", service_name) | ||
2350 | 64 | mod = __import__(container, fromlist=[service_name]) | ||
2351 | 65 | return getattr(mod, service_name).get_oauth_data() | ||
2352 | 66 | except ImportError, e: | ||
2353 | 67 | log.error("The service %r is unknown. It is not a " | ||
2354 | 68 | "module in the %s package." % (service_name, container)) | ||
2355 | 69 | return None | ||
2356 | 70 | |||
2357 | 71 | def do_all_replication(local_port): | ||
2358 | 72 | log.debug("started replicating") | ||
2359 | 73 | try: | ||
2360 | 74 | global already_replicating # Fuzzy, as not really critical, | ||
2361 | 75 | already_replicating = True # just trying to be polite. | ||
2362 | 76 | |||
2363 | 77 | try: | ||
2364 | 78 | # All machines running desktopcouch must advertise themselves with | ||
2365 | 79 | # zeroconf. We collect those elsewhere and filter out the ones | ||
2366 | 80 | # that we have paired with. Now, it's time to send our changes to | ||
2367 | 81 | # all those. | ||
2368 | 82 | |||
2369 | 83 | for remote_hostid, addr, port, is_unpaired, remote_oauth in \ | ||
2370 | 84 | dbus_io.get_seen_paired_hosts(): | ||
2371 | 85 | |||
2372 | 86 | if is_unpaired: | ||
2373 | 87 | # The far end doesn't know we want to break up. | ||
2374 | 88 | count = 0 | ||
2375 | 89 | for local_identifier in couchdb_io.get_my_host_unique_id(): | ||
2376 | 90 | last_exception = None | ||
2377 | 91 | try: | ||
2378 | 92 | # Tell her gently, using each pseudonym. | ||
2379 | 93 | couchdb_io.expunge_pairing(local_identifier, | ||
2380 | 94 | couchdb_io.mkuri(addr, port), remote_oauth) | ||
2381 | 95 | count += 1 | ||
2382 | 96 | except Exception, e: | ||
2383 | 97 | last_exception = e | ||
2384 | 98 | if count == 0: | ||
2385 | 99 | if last_exception is not None: | ||
2386 | 100 | # If she didn't recognize us, something's wrong. | ||
2387 | 101 | try: | ||
2388 | 102 | raise last_exception | ||
2389 | 103 | # push caught exception back... | ||
2390 | 104 | except: | ||
2391 | 105 | # ... so that we log it here. | ||
2392 | 106 | logging.exception( | ||
2393 | 107 | "failed to unpair from other end.") | ||
2394 | 108 | continue | ||
2395 | 109 | else: | ||
2396 | 110 | # Finally, find your inner peace... | ||
2397 | 111 | couchdb_io.expunge_pairing(remote_hostid) | ||
2398 | 112 | # ...and move on. | ||
2399 | 113 | continue | ||
2400 | 114 | |||
2401 | 115 | # Ah, good, this is an active relationship. Be a giver. | ||
2402 | 116 | log.debug("want to replipush to discovered host %r @ %s", | ||
2403 | 117 | remote_hostid, addr) | ||
2404 | 118 | for db_name in couchdb_io.get_database_names_replicatable( | ||
2405 | 119 | couchdb_io.mkuri("localhost", local_port)): | ||
2406 | 120 | if not is_running: return | ||
2407 | 121 | couchdb_io.replicate(db_name, db_name, | ||
2408 | 122 | target_host=addr, target_port=port, | ||
2409 | 123 | source_port=local_port, target_oauth=remote_oauth) | ||
2410 | 124 | log.debug("replication of discovered hosts finished") | ||
2411 | 125 | except Exception, e: | ||
2412 | 126 | log.exception("replication of discovered hosts aborted") | ||
2413 | 127 | pass | ||
2414 | 128 | |||
2415 | 129 | try: | ||
2416 | 130 | # There may be services we send data to. Use the service name (sn) | ||
2417 | 131 | # to look up what the service needs from us. | ||
2418 | 132 | |||
2419 | 133 | for remote_hostid, sn, to_pull, to_push in \ | ||
2420 | 134 | couchdb_io.get_static_paired_hosts(): | ||
2421 | 135 | |||
2422 | 136 | if not sn in dir(replication_services): | ||
2423 | 137 | if not is_running: return | ||
2424 | 138 | if sn in known_bad_service_names: | ||
2425 | 139 | continue # Don't nag. | ||
2426 | 140 | known_bad_service_names.add(sn) | ||
2427 | 141 | |||
2428 | 142 | remote_oauth_data = oauth_info_for_service(sn) | ||
2429 | 143 | |||
2430 | 144 | # TODO: push all this into service module. | ||
2431 | 145 | try: | ||
2432 | 146 | remote_location = db_targetprefix_for_service(sn) | ||
2433 | 147 | urlinfo = urlparse.urlsplit(str(remote_location)) | ||
2434 | 148 | except ValueError, e: | ||
2435 | 149 | log.warn("Can't reach service %s. %s", sn, e) | ||
2436 | 150 | continue | ||
2437 | 151 | if ":" in urlinfo.netloc: | ||
2438 | 152 | addr, port = urlinfo.netloc.rsplit(":", 1) | ||
2439 | 153 | else: | ||
2440 | 154 | addr = urlinfo.netloc | ||
2441 | 155 | port = 443 if urlinfo.scheme == "https" else 80 | ||
2442 | 156 | remote_db_name_prefix = urlinfo.path.strip("/") | ||
2443 | 157 | # ^ | ||
2444 | 158 | |||
2445 | 159 | if to_pull: | ||
2446 | 160 | for db_name in couchdb_io.get_database_names_replicatable( | ||
2447 | 161 | couchdb_io.mkuri("localhost", int(local_port))): | ||
2448 | 162 | if not is_running: return | ||
2449 | 163 | |||
2450 | 164 | remote_db_name = remote_db_name_prefix + "/" + db_name | ||
2451 | 165 | |||
2452 | 166 | log.debug("want to replipush %r to static host %r @ %s", | ||
2453 | 167 | remote_db_name, remote_hostid, addr) | ||
2454 | 168 | couchdb_io.replicate(db_name, remote_db_name, | ||
2455 | 169 | target_host=addr, target_port=port, | ||
2456 | 170 | source_port=local_port, target_ssl=True, | ||
2457 | 171 | target_oauth=remote_oauth_data) | ||
2458 | 172 | if to_push: | ||
2459 | 173 | for remote_db_name in \ | ||
2460 | 174 | couchdb_io.get_database_names_replicatable( | ||
2461 | 175 | couchdb_io.mkuri("localhost", | ||
2462 | 176 | int(local_port))): | ||
2463 | 177 | if not is_running: return | ||
2464 | 178 | try: | ||
2465 | 179 | if not remote_db_name.startswith( | ||
2466 | 180 | str(remote_db_name_prefix + "/")): | ||
2467 | 181 | continue | ||
2468 | 182 | except ValueError, e: | ||
2469 | 183 | log.error("skipping %r on %s. %s", remote_db_name, sn, e) | ||
2470 | 184 | continue | ||
2471 | 185 | |||
2472 | 186 | prefix_len = len(str(remote_db_name_prefix)) | ||
2473 | 187 | db_name = remote_db_name[1+prefix_len:] | ||
2474 | 188 | if db_name.strip("/") == "management": | ||
2475 | 189 | continue # be paranoid about what we accept. | ||
2476 | 190 | log.debug( | ||
2477 | 191 | "want to replipull %r from static host %r @ %s", | ||
2478 | 192 | db_name, remote_hostid, addr) | ||
2479 | 193 | couchdb_io.replicate(remote_db_name, db_name, | ||
2480 | 194 | source_host=addr, source_port=port, | ||
2481 | 195 | target_port=local_port, source_ssl=True, | ||
2482 | 196 | source_oauth=remote_oauth_data) | ||
2483 | 197 | |||
2484 | 198 | except Exception, e: | ||
2485 | 199 | log.exception("replication of services aborted") | ||
2486 | 200 | pass | ||
2487 | 201 | finally: | ||
2488 | 202 | already_replicating = False | ||
2489 | 203 | log.debug("finished replicating") | ||
2490 | 204 | |||
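The service-URL handling in `do_all_replication` splits `urlinfo.netloc` into host and port, infers a default port from the scheme, and strips the path into a database-name prefix. That parsing can be sketched as a standalone helper; the URLs in the test are hypothetical examples, not real service endpoints:

```python
try:
    import urlparse                  # Python 2
except ImportError:
    import urllib.parse as urlparse  # Python 3

def split_service_location(remote_location):
    """Return (host, port, db_name_prefix) for a service base URL,
    mirroring the parsing in do_all_replication."""
    urlinfo = urlparse.urlsplit(str(remote_location))
    if ":" in urlinfo.netloc:
        # Explicit port given, e.g. "example.com:5984".
        addr, port = urlinfo.netloc.rsplit(":", 1)
        port = int(port)
    else:
        addr = urlinfo.netloc
        # No explicit port: infer it from the scheme.
        port = 443 if urlinfo.scheme == "https" else 80
    return addr, port, urlinfo.path.strip("/")
```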
2491 | 205 | |||
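The pull branch strips the service prefix from a remote database name by slicing (`remote_db_name[1+prefix_len:]`) after a `startswith` guard. The same step in isolation, with hypothetical database names:

```python
def strip_service_prefix(remote_db_name, prefix):
    """Drop "<prefix>/" from a remote database name, as the pull
    loop above does; return None for names outside the prefix."""
    prefix = str(prefix)
    # Only names under "<prefix>/" belong to this service.
    if not remote_db_name.startswith(prefix + "/"):
        return None
    # Drop the prefix plus the joining "/".
    return remote_db_name[len(prefix) + 1:]
```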
2492 | 206 | def replicate_local_databases_to_paired_hosts(local_port): | ||
2493 | 207 | if already_replicating: | ||
2494 | 208 | log.warn("previous replication hasn't finished; skipping this run.") | ||
2495 | 209 | return False | ||
2496 | 210 | |||
2497 | 211 | reactor.callInThread(do_all_replication, local_port) | ||
2498 | 212 | |||
2499 | 213 | def set_up(port_getter): | ||
2500 | 214 | port = port_getter() | ||
2501 | 215 | unique_identifiers = couchdb_io.get_my_host_unique_id( | ||
2502 | 216 | couchdb_io.mkuri("localhost", int(port)), create=True) | ||
2503 | 217 | |||
2504 | 218 | beacons = [dbus_io.LocationAdvertisement(port, "desktopcouch " + i) | ||
2505 | 219 | for i in unique_identifiers] | ||
2506 | 220 | for b in beacons: | ||
2507 | 221 | try: | ||
2508 | 222 | b.publish() | ||
2509 | 223 | except dbus.exceptions.DBusException, e: | ||
2510 | 224 | log.error("We seem to be running already, or can't publish " | ||
2511 | 225 | "our zeroconf advert. %s", e) | ||
2512 | 226 | return None | ||
2513 | 227 | |||
2514 | 228 | dbus_io.maintain_discovered_servers() | ||
2515 | 229 | |||
2516 | 230 | t = task.LoopingCall(replicate_local_databases_to_paired_hosts, port) | ||
2517 | 231 | t.start(600) | ||
2518 | 232 | |||
2519 | 233 | # TODO: port may change, so every so often, check it and | ||
2520 | 234 | # perhaps refresh the beacons. We return an array of beacons, so we could | ||
2521 | 235 | # keep a reference to that array and mutate it when the port-beacons | ||
2522 | 236 | # change. | ||
2523 | 237 | |||
2524 | 238 | return beacons, t | ||
2525 | 239 | |||
2526 | 240 | |||
2527 | 241 | def tear_down(beacons, looping_task): | ||
2528 | 242 | for b in beacons: | ||
2529 | 243 | b.unpublish() | ||
2530 | 244 | try: | ||
2531 | 245 | is_running = False | ||
2532 | 246 | looping_task.stop() | ||
2533 | 247 | except: | ||
2534 | 248 | pass | ||
2535 | 0 | 249 | ||
2536 | === removed file 'desktopcouch/replication.py' | |||
2537 | --- desktopcouch/replication.py 2009-09-28 12:06:08 +0000 | |||
2538 | +++ desktopcouch/replication.py 1970-01-01 00:00:00 +0000 | |||
2539 | @@ -1,242 +0,0 @@ | |||
2540 | 1 | # Copyright 2009 Canonical Ltd. | ||
2541 | 2 | # | ||
2542 | 3 | # This file is part of desktopcouch. | ||
2543 | 4 | # | ||
2544 | 5 | # desktopcouch is free software: you can redistribute it and/or modify | ||
2545 | 6 | # it under the terms of the GNU Lesser General Public License version 3 | ||
2546 | 7 | # as published by the Free Software Foundation. | ||
2547 | 8 | # | ||
2548 | 9 | # desktopcouch is distributed in the hope that it will be useful, | ||
2549 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
2550 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
2551 | 12 | # GNU Lesser General Public License for more details. | ||
2552 | 13 | # | ||
2553 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
2554 | 15 | # along with desktopcouch. If not, see <http://www.gnu.org/licenses/>. | ||
2555 | 16 | # | ||
2556 | 17 | # Authors: Chad Miller <chad.miller@canonical.com> | ||
2557 | 18 | |||
2558 | 19 | import threading | ||
2559 | 20 | import logging | ||
2560 | 21 | import logging.handlers | ||
2561 | 22 | log = logging.getLogger("replication") | ||
2562 | 23 | |||
2563 | 24 | import dbus.exceptions | ||
2564 | 25 | |||
2565 | 26 | import desktopcouch | ||
2566 | 27 | from desktopcouch.pair.couchdb_pairing import couchdb_io | ||
2567 | 28 | from desktopcouch.pair.couchdb_pairing import dbus_io | ||
2568 | 29 | from desktopcouch import replication_services | ||
2569 | 30 | |||
2570 | 31 | try: | ||
2571 | 32 | import urlparse | ||
2572 | 33 | except ImportError: | ||
2573 | 34 | import urllib.parse as urlparse | ||
2574 | 35 | |||
2575 | 36 | from twisted.internet import task, reactor | ||
2576 | 37 | |||
2577 | 38 | |||
2578 | 39 | known_bad_service_names = set() | ||
2579 | 40 | already_replicating = False | ||
2580 | 41 | is_running = True | ||
2581 | 42 | |||
2582 | 43 | |||
2583 | 44 | def db_targetprefix_for_service(service_name): | ||
2584 | 45 | """Use the service name to look up what the prefix should be on the | ||
2585 | 46 | databases. This gives an egalitarian way for non-UbuntuOne servers to have | ||
2586 | 47 | their own remote-db-name scheme.""" | ||
2587 | 48 | try: | ||
2588 | 49 | container = "desktopcouch.replication_services" | ||
2589 | 50 | log.debug("Looking up prefix for service %r", service_name) | ||
2590 | 51 | mod = __import__(container, fromlist=[service_name]) | ||
2591 | 52 | return getattr(mod, service_name).db_name_prefix | ||
2592 | 53 | except ImportError, e: | ||
2593 | 54 | log.error("The service %r is unknown. It is not a " | ||
2594 | 55 | "module in the %s package ." % (sn, container)) | ||
2595 | 56 | return "" | ||
2596 | 57 | except Exception, e: | ||
2597 | 58 | log.exception("Not changing remote db name.") | ||
2598 | 59 | return "" | ||
2599 | 60 | |||
2600 | 61 | def oauth_info_for_service(service_name): | ||
2601 | 62 | """Use the service name to look up what oauth information we should use | ||
2602 | 63 | when talking to that service.""" | ||
2603 | 64 | try: | ||
2604 | 65 | container = "desktopcouch.replication_services" | ||
2605 | 66 | log.debug("Looking up prefix for service %r", service_name) | ||
2606 | 67 | mod = __import__(container, fromlist=[service_name]) | ||
2607 | 68 | return getattr(mod, service_name).get_oauth_data() | ||
2608 | 69 | except ImportError, e: | ||
2609 | 70 | log.error("The service %r is unknown. It is not a " | ||
2610 | 71 | "module in the %s package ." % (sn, container)) | ||
2611 | 72 | return None | ||
2612 | 73 | |||
2613 | 74 | def do_all_replication(local_port): | ||
2614 | 75 | log.debug("started replicating") | ||
2615 | 76 | try: | ||
2616 | 77 | global already_replicating # Fuzzy, as not really critical, | ||
2617 | 78 | already_replicating = True # just trying to be polite. | ||
2618 | 79 | |||
2619 | 80 | try: | ||
2620 | 81 | # All machines running desktopcouch must advertise themselves with | ||
2621 | 82 | # zeroconf. We collect those elsewhere and filter out the ones | ||
2622 | 83 | # that we have paired with. Now, it's time to send our changes to | ||
2623 | 84 | # all those. | ||
2624 | 85 | |||
2625 | 86 | for remote_hostid, addr, port, is_unpaired in \ | ||
2626 | 87 | dbus_io.get_seen_paired_hosts(): | ||
2627 | 88 | |||
2628 | 89 | if is_unpaired: | ||
2629 | 90 | # The far end doesn't know we want to break up. | ||
2630 | 91 | count = 0 | ||
2631 | 92 | for local_identifier in couchdb_io.get_my_host_unique_id(): | ||
2632 | 93 | last_exception = None | ||
2633 | 94 | try: | ||
2634 | 95 | # Tell her gently, using each pseudonym. | ||
2635 | 96 | couchdb_io.expunge_pairing(local_identifier, | ||
2636 | 97 | couchdb_io.mkuri(addr, port)) | ||
2637 | 98 | count += 1 | ||
2638 | 99 | except Exception, e: | ||
2639 | 100 | last_exception = e | ||
2640 | 101 | if count == 0: | ||
2641 | 102 | if last_exception is not None: | ||
2642 | 103 | # If she didn't recognize us, something's wrong. | ||
2643 | 104 | raise last_exception | ||
2644 | 105 | else: | ||
2645 | 106 | # Finally, find your inner peace... | ||
2646 | 107 | couchdb_io.expunge_pairing(remote_identifier) | ||
2647 | 108 | # ...and move on. | ||
2648 | 109 | continue | ||
2649 | 110 | |||
2650 | 111 | # Ah, good, this is an active relationship. Be a giver. | ||
2651 | 112 | log.debug("want to replipush to discovered host %r @ %s", | ||
2652 | 113 | remote_hostid, addr) | ||
2653 | 114 | for db_name in couchdb_io.get_database_names_replicatable( | ||
2654 | 115 | couchdb_io.mkuri("localhost", local_port)): | ||
2655 | 116 | if not is_running: return | ||
2656 | 117 | couchdb_io.replicate(db_name, db_name, | ||
2657 | 118 | target_host=addr, target_port=port, | ||
2658 | 119 | source_port=local_port) | ||
2659 | 120 | except Exception, e: | ||
2660 | 121 | log.exception("replication of discovered hosts aborted") | ||
2661 | 122 | pass | ||
2662 | 123 | |||
2663 | 124 | try: | ||
2664 | 125 | # There may be services we send data to. Use the service name (sn) | ||
2665 | 126 | # to look up what the service needs from us. | ||
2666 | 127 | |||
2667 | 128 | for remote_hostid, sn, to_pull, to_push in \ | ||
2668 | 129 | couchdb_io.get_static_paired_hosts(): | ||
2669 | 130 | |||
2670 | 131 | if not sn in dir(replication_services): | ||
2671 | 132 | if not is_running: return | ||
2672 | 133 | if sn in known_bad_service_names: | ||
2673 | 134 | continue # Don't nag. | ||
2674 | 135 | known_bad_service_names.add(sn) | ||
2675 | 136 | |||
2676 | 137 | remote_oauth_data = oauth_info_for_service(sn) | ||
2677 | 138 | |||
2678 | 139 | # TODO: push all this into service module. | ||
2679 | 140 | try: | ||
2680 | 141 | remote_location = db_targetprefix_for_service(sn) | ||
2681 | 142 | urlinfo = urlparse.urlsplit(str(remote_location)) | ||
2682 | 143 | except ValueError, e: | ||
2683 | 144 | log.warn("Can't reach service %s. %s", sn, e) | ||
2684 | 145 | continue | ||
2685 | 146 | if ":" in urlinfo.netloc: | ||
2686 | 147 | addr, port = urlinfo.netloc.rsplit(":", 1) | ||
2687 | 148 | else: | ||
2688 | 149 | addr = urlinfo.netloc | ||
2689 | 150 | port = 443 if urlinfo.scheme == "https" else 80 | ||
2690 | 151 | remote_db_name_prefix = urlinfo.path.strip("/") | ||
2691 | 152 | # ^ | ||
2692 | 153 | |||
2693 | 154 | if to_pull: | ||
2694 | 155 | for db_name in couchdb_io.get_database_names_replicatable( | ||
2695 | 156 | couchdb_io.mkuri("localhost", int(local_port))): | ||
2696 | 157 | if not is_running: return | ||
2697 | 158 | |||
2698 | 159 | remote_db_name = remote_db_name_prefix + "/" + db_name | ||
2699 | 160 | |||
2700 | 161 | log.debug("want to replipush %r to static host %r @ %s", | ||
2701 | 162 | remote_db_name, remote_hostid, addr) | ||
2702 | 163 | couchdb_io.replicate(db_name, remote_db_name, | ||
2703 | 164 | target_host=addr, target_port=port, | ||
2704 | 165 | source_port=local_port, target_ssl=True, | ||
2705 | 166 | target_oauth=remote_oauth_data) | ||
2706 | 167 | if to_push: | ||
2707 | 168 | for remote_db_name in \ | ||
2708 | 169 | couchdb_io.get_database_names_replicatable( | ||
2709 | 170 | couchdb_io.mkuri(addr, port)): | ||
2710 | 171 | if not is_running: return | ||
2711 | 172 | try: | ||
2712 | 173 | if not remote_db_name.startswith( | ||
2713 | 174 | str(remote_db_name_prefix + "/")): | ||
2714 | 175 | continue | ||
2715 | 176 | except ValueError, e: | ||
2716 | 177 | log.error("skipping %r on %s. %s", db_name, sn, e) | ||
2717 | 178 | continue | ||
2718 | 179 | |||
2719 | 180 | db_name = remote_db_name[1+len(str(remote_db_name_prefix)):] | ||
2720 | 181 | if db_name.strip("/") == "management": | ||
2721 | 182 | continue # be paranoid about what we accept. | ||
2722 | 183 | log.debug("want to replipull %r from static host %r @ %s", | ||
2723 | 184 | db_name, remote_hostid, addr) | ||
2724 | 185 | couchdb_io.replicate(remote_db_name, db_name, | ||
2725 | 186 | source_host=addr, source_port=port, | ||
2726 | 187 | target_port=local_port, source_ssl=True, | ||
2727 | 188 | source_oauth=remote_oauth_data) | ||
2728 | 189 | |||
2729 | 190 | except Exception, e: | ||
2730 | 191 | log.exception("replication of services aborted") | ||
2731 | 192 | pass | ||
2732 | 193 | finally: | ||
2733 | 194 | already_replicating = False | ||
2734 | 195 | log.debug("finished replicating") | ||
2735 | 196 | |||
2736 | 197 | |||
2737 | 198 | def replicate_local_databases_to_paired_hosts(local_port): | ||
2738 | 199 | if already_replicating: | ||
2739 | 200 | log.warn("haven't finished replicating before next time to start.") | ||
2740 | 201 | return False | ||
2741 | 202 | |||
2742 | 203 | reactor.callInThread(do_all_replication, local_port) | ||
2743 | 204 | |||
2744 | 205 | def set_up(port_getter): | ||
2745 | 206 | port = port_getter() | ||
2746 | 207 | unique_identifiers = couchdb_io.get_my_host_unique_id( | ||
2747 | 208 | couchdb_io.mkuri("localhost", int(port)), create=True) | ||
2748 | 209 | |||
2749 | 210 | beacons = [dbus_io.LocationAdvertisement(port, "desktopcouch " + i) | ||
2750 | 211 | for i in unique_identifiers] | ||
2751 | 212 | for b in beacons: | ||
2752 | 213 | try: | ||
2753 | 214 | b.publish() | ||
2754 | 215 | except dbus.exceptions.DBusException, e: | ||
2755 | 216 | log.error("We seem to be running already, or can't publish " | ||
2756 | 217 | "our zeroconf advert. %s", e) | ||
2757 | 218 | return None | ||
2758 | 219 | |||
2759 | 220 | dbus_io.discover_services(None, None, True) | ||
2760 | 221 | |||
2761 | 222 | dbus_io.maintain_discovered_servers() | ||
2762 | 223 | |||
2763 | 224 | t = task.LoopingCall(replicate_local_databases_to_paired_hosts, port) | ||
2764 | 225 | t.start(600) | ||
2765 | 226 | |||
2766 | 227 | # TODO: port may change, so every so often, check it and | ||
2767 | 228 | # perhaps refresh the beacons. We return an array of beacons, so we could | ||
2768 | 229 | # keep a reference to that array and mutate it when the port-beacons | ||
2769 | 230 | # change. | ||
2770 | 231 | |||
2771 | 232 | return beacons, t | ||
2772 | 233 | |||
2773 | 234 | |||
2774 | 235 | def tear_down(beacons, looping_task): | ||
2775 | 236 | for b in beacons: | ||
2776 | 237 | b.unpublish() | ||
2777 | 238 | try: | ||
2778 | 239 | is_running = False | ||
2779 | 240 | looping_task.stop() | ||
2780 | 241 | except: | ||
2781 | 242 | pass | ||
2782 | 243 | 0 | ||
2783 | === added directory 'desktopcouch/replication_services' | |||
2784 | === removed directory 'desktopcouch/replication_services' | |||
2785 | === added file 'desktopcouch/replication_services/__init__.py' | |||
2786 | --- desktopcouch/replication_services/__init__.py 1970-01-01 00:00:00 +0000 | |||
2787 | +++ desktopcouch/replication_services/__init__.py 2009-10-12 14:29:10 +0000 | |||
2788 | @@ -0,0 +1,4 @@ | |||
2789 | 1 | """Modules imported here are available as services.""" | ||
2790 | 2 | |||
2791 | 3 | import ubuntuone | ||
2792 | 4 | import example | ||
2793 | 0 | 5 | ||
2794 | === removed file 'desktopcouch/replication_services/__init__.py' | |||
2795 | --- desktopcouch/replication_services/__init__.py 2009-09-23 14:22:38 +0000 | |||
2796 | +++ desktopcouch/replication_services/__init__.py 1970-01-01 00:00:00 +0000 | |||
2797 | @@ -1,4 +0,0 @@ | |||
2798 | 1 | """Modules imported here are available as services.""" | ||
2799 | 2 | |||
2800 | 3 | import ubuntuone | ||
2801 | 4 | import example | ||
2802 | 5 | 0 | ||
2803 | === added file 'desktopcouch/replication_services/example.py' | |||
2804 | --- desktopcouch/replication_services/example.py 1970-01-01 00:00:00 +0000 | |||
2805 | +++ desktopcouch/replication_services/example.py 2009-10-12 14:29:10 +0000 | |||
2806 | @@ -0,0 +1,26 @@ | |||
2807 | 1 | # Note that the __init__.py of this package must import this module for it to | ||
2808 | 2 | # be found. Plugin logic is not pretty, and not implemented yet. | ||
2809 | 3 | |||
2810 | 4 | # Required | ||
2811 | 5 | name = "Example" | ||
2812 | 6 | # Required; should include the words "cloud service" on the end. | ||
2813 | 7 | description = "Example cloud service" | ||
2814 | 8 | |||
2815 | 9 | # Required | ||
2816 | 10 | def is_active(): | ||
2817 | 11 | """Can we deliver information?""" | ||
2818 | 12 | return False | ||
2819 | 13 | |||
2820 | 14 | # Required | ||
2821 | 15 | def oauth_data(): | ||
2822 | 16 | """OAuth information needed to replicate to a server.""" | ||
2823 | 17 | return dict(consumer_key="", consumer_secret="", oauth_token="", | ||
2824 | 18 | oauth_token_secret="") | ||
2825 | 19 | # or to symbolize failure | ||
2826 | 20 | return None | ||
2827 | 21 | |||
2828 | 22 | # Access to this as a string fires off functions. | ||
2829 | 23 | # Required | ||
2830 | 24 | db_name_prefix = "http://host.required.example.com/a_prefix_if_necessary" | ||
2831 | 25 | # You can be sure that access to this will always, always be through its | ||
2832 | 26 | # __str__ method. | ||
2833 | 0 | 27 | ||
2834 | === removed file 'desktopcouch/replication_services/example.py' | |||
2835 | --- desktopcouch/replication_services/example.py 2009-09-28 12:06:08 +0000 | |||
2836 | +++ desktopcouch/replication_services/example.py 1970-01-01 00:00:00 +0000 | |||
2837 | @@ -1,26 +0,0 @@ | |||
2838 | 1 | # Note that the __init__.py of this package must import this module for it to | ||
2839 | 2 | # be found. Plugin logic is not pretty, and not implemented yet. | ||
2840 | 3 | |||
2841 | 4 | # Required | ||
2842 | 5 | name = "Example" | ||
2843 | 6 | # Required; should include the words "cloud service" on the end. | ||
2844 | 7 | description = "Example cloud service" | ||
2845 | 8 | |||
2846 | 9 | # Required | ||
2847 | 10 | def is_active(): | ||
2848 | 11 | """Can we deliver information?""" | ||
2849 | 12 | return False | ||
2850 | 13 | |||
2851 | 14 | # Required | ||
2852 | 15 | def oauth_data(): | ||
2853 | 16 | """OAuth information needed to replicate to a server.""" | ||
2854 | 17 | return dict(consumer_key="", consumer_secret="", oauth_token="", | ||
2855 | 18 | oauth_token_secret="") | ||
2856 | 19 | # or to symbolize failure | ||
2857 | 20 | return None | ||
2858 | 21 | |||
2859 | 22 | # Access to this as a string fires off functions. | ||
2860 | 23 | # Required | ||
2861 | 24 | db_name_prefix = "http://host.required.example.com/a_prefix_if_necessary" | ||
2862 | 25 | # You can be sure that access to this will always, always be through its | ||
2863 | 26 | # __str__ method. | ||
2864 | 27 | 0 | ||
2865 | === added file 'desktopcouch/replication_services/ubuntuone.py' | |||
2866 | --- desktopcouch/replication_services/ubuntuone.py 1970-01-01 00:00:00 +0000 | |||
2867 | +++ desktopcouch/replication_services/ubuntuone.py 2009-10-12 14:29:10 +0000 | |||
2868 | @@ -0,0 +1,125 @@ | |||
2869 | 1 | import hashlib | ||
2870 | 2 | from oauth import oauth | ||
2871 | 3 | import logging | ||
2872 | 4 | import httplib2 | ||
2873 | 5 | import simplejson | ||
2874 | 6 | import gnomekeyring | ||
2875 | 7 | |||
2876 | 8 | name = "Ubuntu One" | ||
2877 | 9 | description = "The Ubuntu One cloud service" | ||
2878 | 10 | |||
2879 | 11 | oauth_consumer_key = "ubuntuone" | ||
2880 | 12 | oauth_consumer_secret = "hammertime" | ||
2881 | 13 | |||
2882 | 14 | def is_active(): | ||
2883 | 15 | """Can we deliver information?""" | ||
2884 | 16 | return get_oauth_data() is not None | ||
2885 | 17 | |||
2886 | 18 | oauth_data = None | ||
2887 | 19 | def get_oauth_data(): | ||
2888 | 20 | """Information needed to replicate to a server.""" | ||
2889 | 21 | global oauth_data | ||
2890 | 22 | if oauth_data is not None: | ||
2891 | 23 | return oauth_data | ||
2892 | 24 | |||
2893 | 25 | try: | ||
2894 | 26 | import gnomekeyring | ||
2895 | 27 | matches = gnomekeyring.find_items_sync( | ||
2896 | 28 | gnomekeyring.ITEM_GENERIC_SECRET, | ||
2897 | 29 | {'ubuntuone-realm': "https://ubuntuone.com", | ||
2898 | 30 | 'oauth-consumer-key': oauth_consumer_key}) | ||
2899 | 31 | if matches: | ||
2900 | 32 | # parse "a=b&c=d" to {"a":"b","c":"d"} | ||
2901 | 33 | kv_list = [x.split("=", 1) for x in matches[0].secret.split("&")] | ||
2902 | 34 | keys, values = zip(*kv_list) | ||
2903 | 35 | keys = [k.replace("oauth_", "") for k in keys] | ||
2904 | 36 | oauth_data = dict(zip(keys, values)) | ||
2905 | 37 | oauth_data.update({ | ||
2906 | 38 | "consumer_key": oauth_consumer_key, | ||
2907 | 39 | "consumer_secret": oauth_consumer_secret, | ||
2908 | 40 | }) | ||
2909 | 41 | return oauth_data | ||
2910 | 42 | except ImportError, e: | ||
2911 | 43 | logging.info("Can't replicate to Ubuntu One cloud without credentials." | ||
2912 | 44 | " %s", e) | ||
2913 | 45 | except gnomekeyring.NoMatchError: | ||
2914 | 46 | logging.info("This machine hasn't authorized itself to Ubuntu One; " | ||
2915 | 47 | "replication to the cloud isn't possible until it has. See " | ||
2916 | 48 | "'ubuntuone-client-applet'.") | ||
2917 | 49 | except gnomekeyring.NoKeyringDaemonError: | ||
2918 | 50 | logging.error("No keyring daemon found in this session, so we have " | ||
2919 | 51 | "no access to Ubuntu One data.") | ||
2920 | 52 | |||
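`get_oauth_data` parses the keyring secret, an `a=b&c=d` query string, into a dict and drops the `oauth_` key prefixes. That parsing step on its own:

```python
def parse_keyring_secret(secret):
    """Parse "oauth_token=abc&oauth_token_secret=xyz" into
    {"token": "abc", "token_secret": "xyz"}, as get_oauth_data does."""
    # Split on "&", then each pair on the first "=" only.
    kv_list = [x.split("=", 1) for x in secret.split("&")]
    keys, values = zip(*kv_list)
    # Drop the "oauth_" prefix from each key.
    keys = [k.replace("oauth_", "") for k in keys]
    return dict(zip(keys, values))
```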
2921 | 53 | def get_oauth_token(consumer): | ||
2922 | 54 | """Get the token from the keyring""" | ||
2923 | 55 | import gobject | ||
2924 | 56 | gobject.set_application_name("desktopcouch replication to Ubuntu One") | ||
2925 | 57 | try: | ||
2926 | 58 | items = gnomekeyring.find_items_sync( | ||
2927 | 59 | gnomekeyring.ITEM_GENERIC_SECRET, | ||
2928 | 60 | {'ubuntuone-realm': "https://one.ubuntu.com", | ||
2929 | 61 | 'oauth-consumer-key': consumer.key}) | ||
2930 | 62 | except gnomekeyring.NoMatchError: | ||
2931 | 63 | logging.info("No o.u.c key. Maybe there's a uo.c key?") | ||
2932 | 64 | items = gnomekeyring.find_items_sync( | ||
2933 | 65 | gnomekeyring.ITEM_GENERIC_SECRET, | ||
2934 | 66 | {'ubuntuone-realm': "https://ubuntuone.com", | ||
2935 | 67 | 'oauth-consumer-key': consumer.key}) | ||
2936 | 68 | if len(items): | ||
2937 | 69 | return oauth.OAuthToken.from_string(items[0].secret) | ||
2938 | 70 | |||
2939 | 71 | def get_oauth_request_header(consumer, access_token, http_url): | ||
2940 | 72 | """Get an oauth request header given the token and the url""" | ||
2941 | 73 | signature_method = oauth.OAuthSignatureMethod_PLAINTEXT() | ||
2942 | 74 | assert http_url.startswith("https") | ||
2943 | 75 | oauth_request = oauth.OAuthRequest.from_consumer_and_token( | ||
2944 | 76 | http_url=http_url, | ||
2945 | 77 | http_method="GET", | ||
2946 | 78 | oauth_consumer=consumer, | ||
2947 | 79 | token=access_token) | ||
2948 | 80 | oauth_request.sign_request(signature_method, consumer, access_token) | ||
2949 | 81 | return oauth_request.to_header() | ||
2950 | 82 | |||
2951 | 83 | |||
2952 | 84 | class PrefixGetter(): | ||
2953 | 85 | def __init__(self): | ||
2954 | 86 | self.str = None | ||
2955 | 87 | self.oauth_header = None | ||
2956 | 88 | |||
2957 | 89 | def __str__(self): | ||
2958 | 90 | if self.str is not None: | ||
2959 | 91 | return self.str | ||
2960 | 92 | |||
2961 | 93 | url = "https://one.ubuntu.com/api/account/" | ||
2962 | 94 | if self.oauth_header is None: | ||
2963 | 95 | consumer = oauth.OAuthConsumer(oauth_consumer_key, | ||
2964 | 96 | oauth_consumer_secret) | ||
2965 | 97 | try: | ||
2966 | 98 | access_token = get_oauth_token(consumer) | ||
2967 | 99 | except gnomekeyring.NoKeyringDaemonError: | ||
2968 | 100 | logging.info("No keyring daemon is running for this session.") | ||
2969 | 101 | raise ValueError("No keyring access") | ||
2970 | 102 | if not access_token: | ||
2971 | 103 | logging.info("Could not get access token from keyring") | ||
2972 | 104 | raise ValueError("No keyring access") | ||
2973 | 105 | self.oauth_header = get_oauth_request_header(consumer, access_token, url) | ||
2974 | 106 | |||
2975 | 107 | client = httplib2.Http() | ||
2976 | 108 | resp, content = client.request(url, "GET", headers=self.oauth_header) | ||
2977 | 109 | if resp['status'] == "200": | ||
2978 | 110 | document = simplejson.loads(content) | ||
2979 | 111 | if "couchdb_root" not in document: | ||
2980 | 112 | raise ValueError("couchdb_root not found in %s" % (document,)) | ||
2981 | 113 | self.str = document["couchdb_root"] | ||
2982 | 114 | else: | ||
2983 | 115 | logging.error("Couldn't talk to %r. Got HTTP %s", url, resp['status']) | ||
2984 | 116 | raise ValueError("HTTP %s for %r" % (resp['status'], url)) | ||
2985 | 117 | |||
2986 | 118 | return self.str | ||
2987 | 119 | |||
2988 | 120 | # Access to this as a string fires off functions. | ||
2989 | 121 | db_name_prefix = PrefixGetter() | ||
2990 | 122 | |||
2991 | 123 | if __name__ == "__main__": | ||
2992 | 124 | logging.basicConfig(level=logging.DEBUG, format="%(message)s") | ||
2993 | 125 | print str(db_name_prefix) | ||
2994 | 0 | 126 | ||
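`PrefixGetter` relies on `db_name_prefix` only ever being read through `str()`, so its `__str__` can fetch the real prefix lazily and cache it. The idea in miniature, with the HTTPS account lookup replaced by a counter so the sketch is self-contained (the prefix value is hypothetical):

```python
class LazyPrefix(object):
    """Compute a value on first str() access, then cache it."""
    def __init__(self, compute):
        self._compute = compute
        self._cached = None

    def __str__(self):
        if self._cached is None:
            # First access: do the (possibly expensive) fetch once.
            self._cached = self._compute()
        return self._cached

calls = []
def fake_fetch():
    # Stands in for the HTTPS request to the account API.
    calls.append(1)
    return "https://couchdb.example.com/u/1234"

db_prefix = LazyPrefix(fake_fetch)
```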
=== removed file 'desktopcouch/replication_services/ubuntuone.py'
--- desktopcouch/replication_services/ubuntuone.py	2009-09-28 12:06:08 +0000
+++ desktopcouch/replication_services/ubuntuone.py	1970-01-01 00:00:00 +0000
@@ -1,125 +0,0 @@
-import hashlib
-from oauth import oauth
-import logging
-import httplib2
-import simplejson
-import gnomekeyring
-
-name = "Ubuntu One"
-description = "The Ubuntu One cloud service"
-
-oauth_consumer_key = "ubuntuone"
-oauth_consumer_secret = "hammertime"
-
-def is_active():
-    """Can we deliver information?"""
-    return get_oauth_data() is not None
-
-oauth_data = None
-def get_oauth_data():
-    """Information needed to replicate to a server."""
-    global oauth_data
-    if oauth_data is not None:
-        return oauth_data
-
-    try:
-        import gnomekeyring
-        matches = gnomekeyring.find_items_sync(
-            gnomekeyring.ITEM_GENERIC_SECRET,
-            {'ubuntuone-realm': "https://ubuntuone.com",
-             'oauth-consumer-key': oauth_consumer_key})
-        if matches:
-            # parse "a=b&c=d" to {"a":"b","c":"d"}
-            kv_list = [x.split("=", 1) for x in matches[0].secret.split("&")]
-            keys, values = zip(*kv_list)
-            keys = [k.replace("oauth_", "") for k in keys]
-            oauth_data = dict(zip(keys, values))
-            oauth_data.update({
-                "consumer_key": oauth_consumer_key,
-                "consumer_secret": oauth_consumer_secret,
-            })
-            return oauth_data
-    except ImportError, e:
-        logging.info("Can't replicate to Ubuntu One cloud without credentials."
-                     " %s", e)
-    except gnomekeyring.NoMatchError:
-        logging.info("This machine hasn't authorized itself to Ubuntu One; "
-                     "replication to the cloud isn't possible until it has. See "
-                     "'ubuntuone-client-applet'.")
-    except gnomekeyring.NoKeyringDaemonError:
-        logging.error("No keyring daemon found in this session, so we have "
-                      "no access to Ubuntu One data.")
-
-def get_oauth_token(consumer):
-    """Get the token from the keyring"""
-    import gobject
-    gobject.set_application_name("desktopcouch replication to Ubuntu One")
-    try:
-        items = gnomekeyring.find_items_sync(
-            gnomekeyring.ITEM_GENERIC_SECRET,
-            {'ubuntuone-realm': "https://one.ubuntu.com",
-             'oauth-consumer-key': consumer.key})
-    except gnomekeyring.NoMatchError:
-        logging.info("No o.u.c key. Maybe there's uo.c key?")
-        items = gnomekeyring.find_items_sync(
-            gnomekeyring.ITEM_GENERIC_SECRET,
-            {'ubuntuone-realm': "https://ubuntuone.com",
-             'oauth-consumer-key': consumer.key})
-    if len(items):
-        return oauth.OAuthToken.from_string(items[0].secret)
-
-def get_oauth_request_header(consumer, access_token, http_url):
-    """Get an oauth request header given the token and the url"""
-    signature_method = oauth.OAuthSignatureMethod_PLAINTEXT()
-    assert http_url.startswith("https")
-    oauth_request = oauth.OAuthRequest.from_consumer_and_token(
-        http_url=http_url,
-        http_method="GET",
-        oauth_consumer=consumer,
-        token=access_token)
-    oauth_request.sign_request(signature_method, consumer, access_token)
-    return oauth_request.to_header()
-
-
-class PrefixGetter():
-    def __init__(self):
-        self.str = None
-        self.oauth_header = None
-
-    def __str__(self):
-        if self.str is not None:
-            return self.str
-
-        url = "https://one.ubuntu.com/api/account/"
-        if self.oauth_header is None:
-            consumer = oauth.OAuthConsumer(oauth_consumer_key,
-                                           oauth_consumer_secret)
-            try:
-                access_token = get_oauth_token(consumer)
-            except gnomekeyring.NoKeyringDaemonError:
-                logging.info("No keyring daemon is running for this session.")
-                raise ValueError("No keyring access")
-            if not access_token:
-                logging.info("Could not get access token from keyring")
-                raise ValueError("No keyring access")
-            self.oauth_header = get_oauth_request_header(consumer, access_token, url)
-
-        client = httplib2.Http()
-        resp, content = client.request(url, "GET", headers=self.oauth_header)
-        if resp['status'] == "200":
-            document = simplejson.loads(content)
-            if "couchdb_root" not in document:
-                raise ValueError("couchdb_root not found in %s" % (document,))
-            self.str = document["couchdb_root"]
-        else:
-            logging.error("Couldn't talk to %r. Got HTTP %s", url, resp['status'])
-            raise ValueError("HTTP %s for %r" % (resp['status'], url))
-
-        return self.str
-
-# Access to this as a string fires off functions.
-db_name_prefix = PrefixGetter()
-
-if __name__ == "__main__":
-    logging.basicConfig(level=logging.DEBUG, format="%(message)s")
-    print str(db_name_prefix)
=== added file 'po/desktopcouch.pot'
--- po/desktopcouch.pot	1970-01-01 00:00:00 +0000
+++ po/desktopcouch.pot	2009-10-12 14:29:10 +0000
@@ -0,0 +1,102 @@
+# Copyright (C) 2009 Canonical Ltd.
+# This file is distributed under the same license as the desktopcouch package.
+# Ken VanDine <ken.vandine@canonical.com>, 2009.
+#
+#, fuzzy
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"Report-Msgid-Bugs-To: \n"
+"POT-Creation-Date: 2009-07-27 15:06-0400\n"
+"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
+"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
+"Language-Team: LANGUAGE <LL@li.org>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=CHARSET\n"
+"Content-Transfer-Encoding: 8bit\n"
+
+#: ../desktopcouch-pair.desktop.in.h:1 ../bin/desktopcouch-pair.py:592
+msgid "CouchDB Pairing Tool"
+msgstr ""
+
+#: ../desktopcouch-pair.desktop.in.h:2
+msgid "Utility for pairing Desktop CouchDB"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:153
+#, python-format
+msgid "Inviting %s to pair for CouchDB Pairing"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:167
+#, python-format
+msgid "We're inviting %s to pair with\n"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:223
+msgid "Accepting Invitation"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:232
+#, python-format
+msgid "To verify your pairing with %s, enter its secret."
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:260
+msgid "Verify and connect"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:355
+msgid "Waiting for CouchDB Pairing Invitations"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:376
+msgid "Add 60 seconds"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:390
+msgid "We're listening for invitations! From another\n"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:414
+#, python-format
+msgid "%d seconds remaining"
+msgstr ""
+
+#. pylint: disable-msg=W0201
+#: ../bin/desktopcouch-pair.py:451 ../bin/desktopcouch-pair.py:452
+msgid "service name"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:457
+msgid "Pick a listening host to invite it to pair with us."
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:560
+msgid "Add this host to the list for others to see?"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:564
+msgid "Listen for invitations"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:576
+msgid "I also know of CouchDB sessions here. Pick one "
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:600
+msgid "Copyright 2009 Canonical"
+msgstr ""
+
+#. Some kind of two-phase commit would be nice here, before we say
+#. successful.
+#. couchdb_io.replicate_to(...)
+#: ../bin/desktopcouch-pair.py:620
+#, python-format
+msgid "Paired with %(host)s"
+msgstr ""
+
+#: ../bin/desktopcouch-pair.py:625
+#, python-format
+msgid "Successfully paired with %(host)s %(info)s."
+msgstr ""
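The new template ships msgids with empty msgstrs; until translators fill them in and the catalogue is compiled to a `.mo`, gettext falls back to the English msgid. A quick sketch of that fallback using only Python's stdlib `gettext` (no desktopcouch code involved):

```python
import gettext

# With no compiled .mo catalogue available, NullTranslations returns the
# msgid unchanged -- exactly what users see while msgstr entries are empty.
_ = gettext.NullTranslations().gettext

print(_("CouchDB Pairing Tool"))       # prints the English msgid
print(_("%d seconds remaining") % 42)  # prints "42 seconds remaining"
```

Entries flagged `#, python-format` (like `%d seconds remaining`) must keep their format placeholders intact in every translation, which is why xgettext marks them.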
=== modified file 'setup.cfg'
--- setup.cfg	2009-09-23 14:22:38 +0000
+++ setup.cfg	2009-10-12 14:29:10 +0000
@@ -1,13 +1,13 @@
-[build_i18n]
-domain = desktopcouch
-desktop_files = [("share/applications", ("desktopcouch-pair.desktop.in",))]
-
 [egg_info]
 tag_build = 
 tag_date = 0
 tag_svn_revision = 0
 
 [build]
-i18n = True
-icons = True
+i18n=True
+icons=True
+
+[build_i18n]
+domain=desktopcouch
+desktop_files=[("share/applications", ("desktopcouch-pair.desktop.in",))]
 
 
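Two things change in setup.cfg: the `[build_i18n]` section moves below `[build]`, and the spaces around `=` are dropped. Neither affects the parsed values, as a quick check with Python's ini parser shows (modern `configparser` spelling; the Python 2 `ConfigParser` of 2009 behaves the same way for this):

```python
from configparser import ConfigParser

OLD = "[build]\ni18n = True\nicons = True\n"
NEW = "[build]\ni18n=True\nicons=True\n"

def load(text):
    # "key = value" and "key=value" parse to the identical option dict
    cp = ConfigParser()
    cp.read_string(text)
    return dict(cp["build"])

assert load(OLD) == load(NEW) == {"i18n": "True", "icons": "True"}
print(load(NEW))  # {'i18n': 'True', 'icons': 'True'}
```

Section order is likewise irrelevant to the parser, so the reshuffle is purely cosmetic.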
=== modified file 'setup.py'
--- setup.py	2009-09-28 12:06:08 +0000
+++ setup.py	2009-10-12 14:29:10 +0000
@@ -22,7 +22,7 @@
 
 setup(
     name='desktopcouch',
-    version='0.4.2',
+    version='0.4.4',
     description='A Desktop CouchDB instance.',
     url='https://launchpad.net/desktopcouch',
     license='LGPL-3',
@@ -32,11 +32,13 @@
     scripts=['bin/desktopcouch-pair'],
     data_files = [('/usr/lib/desktopcouch/', ['bin/desktopcouch-service',
                                               'bin/desktopcouch-stop']),
+                  # Be sure all additions are reflected in MANIFEST.in !
                   ('/usr/share/doc/python-desktopcouch-records/api/',
-                   ['desktopcouch/records/doc/records.txt']),
-                  # System-level XDG_CONFIG_DIRS folder
+                   ['desktopcouch/records/doc/records.txt',
+                    'desktopcouch/records/doc/field_registry.txt',
+                    'desktopcouch/contacts/schema.txt']),
                   ('/etc/xdg/desktop-couch/',
                    ['config/desktop-couch/compulsory-auth.ini']),
                   ('/usr/share/desktopcouch/', ['data/couchdb.tmpl']),
                   ('/usr/share/dbus-1/services/', ['org.desktopcouch.CouchDB.service']),
                   ('share/man/man1/', ['docs/man/desktopcouch-pair.1'])],
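The comment added to setup.py warns that every `data_files` payload must also appear in MANIFEST.in, or distutils will silently omit it from the source tarball (the very problem this merge addresses). A hypothetical pre-release check for plain `include` lines could look like this (the helper name and the sample manifest are illustrative, not part of the real tree; real MANIFEST.in files also support `recursive-include` and friends, which this sketch ignores):

```python
def missing_from_manifest(data_files, manifest_text):
    """Return payload paths not covered by any 'include' line in MANIFEST.in."""
    included = {line.split(None, 1)[1].strip()
                for line in manifest_text.splitlines()
                if line.startswith("include ")}
    return [path
            for _target, paths in data_files   # same shape as setup()'s data_files
            for path in paths
            if path not in included]


# The doc entries this merge adds to setup.py:
data_files = [
    ('/usr/share/doc/python-desktopcouch-records/api/',
     ['desktopcouch/records/doc/records.txt',
      'desktopcouch/records/doc/field_registry.txt',
      'desktopcouch/contacts/schema.txt']),
]

# A manifest that forgot the two new files:
manifest = "include desktopcouch/records/doc/records.txt\n"

print(missing_from_manifest(data_files, manifest))
# -> the field_registry.txt and schema.txt paths are reported as missing
```

Running such a check in `debian/rules` or a test suite would catch an incomplete tarball before upload rather than after, as happened here.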
The upstream tarball seems to be incomplete as discussed on IRC.
Thanks,
James