Merge lp:~diegosarmentero/ubuntuone-client/farewell-u1 into lp:ubuntuone-client
Status: Merged
Approved by: dobey
Approved revision: 1411
Merged at revision: 1404
Proposed branch: lp:~diegosarmentero/ubuntuone-client/farewell-u1
Merge into: lp:ubuntuone-client
Diff against target: 252 lines (+74/-58), 3 files modified:
- tests/status/test_aggregator.py (+13/-13)
- ubuntuone/status/aggregator.py (+8/-0)
- ubuntuone/syncdaemon/main.py (+53/-45)

To merge this branch: bzr merge lp:~diegosarmentero/ubuntuone-client/farewell-u1
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
dobey (community) | | | Approve
Manuel de la Peña (community) | | | Approve

Review via email: mp+214519@code.launchpad.net
Commit message
- Show a message when the client is started indicating that the service will be suspended on June 1st.
- After June 1st, don't contact the server.
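The two commit-message points boil down to a single date gate at startup. A minimal sketch, assuming the behavior of the landed patch (`should_start` is a hypothetical helper for illustration; the real code performs this comparison inline in `main.py`, and revision 1410 made the gate strict so the client is disabled on June 1st itself):

```python
import datetime

# Shutdown date from the patch.
END_DATE = datetime.date(2014, 6, 1)

def should_start(today):
    """Hypothetical helper: True if syncdaemon may start and contact the server."""
    return today < END_DATE

print(should_start(datetime.date(2014, 5, 31)))  # True: last day of service
print(should_start(datetime.date(2014, 6, 1)))   # False: service suspended
```

On the last day of service the client starts normally and shows the farewell notification; from June 1st onward it exits without contacting the server.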
Description of the change
Farewell Ubuntu One File Service
Manuel de la Peña (mandel):
Ubuntu One Auto Pilot (otto-pilot) wrote:
dobey (dobey) wrote:
88 + if datetime.
So on June 2 and every day not June 1 2014, we'll start trying to connect again? And pop the notification every time syncdaemon starts up?
Do we really need to pop the notification *every* time syncdaemon starts up? We've already sent e-mails to everyone, made a blog post, and it's been picked up on several news sites. On June 1, all the existing connections will drop and it will just fail to connect anyway.
dobey (dobey):
1407. By Diego Sarmentero: fixing date comparison
Diego Sarmentero (diegosarmentero) wrote:
> 88 + if datetime.
>
> So on June 2 and every day not June 1 2014, we'll start trying to connect
> again? And pop the notification every time syncdaemon starts up?
>
> Do we really need to pop the notification *every* time syncdaemon starts up?
> We've already sent e-mails to everyone, made a blog post, and it's been picked
> up on several news sites. On June 1, all the existing connections will drop
> and it will just fail to connect anyway.
Fixed
1408. By Diego Sarmentero: considering same date too
dobey (dobey) wrote:
This branch seems to be introducing the test failure. I built the current package in pbuilder for saucy, and it built just fine. After adding this patch to create a new package for the SRU upload and attempting to build it, I get the same test failures on saucy.
dobey (dobey) wrote:
88 + if datetime.
Now the check is backwards. It should be today <= end.
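The off-by-one under review is just the boundary behavior of the two comparison operators on `datetime.date` values (a toy illustration, not project code; note that revision 1410 later switched to the strict form so the service is disabled on June 1st itself):

```python
import datetime

end = datetime.date(2014, 6, 1)
may_31 = datetime.date(2014, 5, 31)
june_1 = datetime.date(2014, 6, 1)

# "today <= end" keeps the client running through June 1st;
# "today < end" shuts it down on June 1st itself.
print(may_31 <= end, may_31 < end)  # True True
print(june_1 <= end, june_1 < end)  # True False
```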
1409. By Diego Sarmentero: fix date comparison
1410. By Diego Sarmentero: disable on june 1st
Diego Sarmentero (diegosarmentero) wrote:
> 88 + if datetime.
>
> Now the check is backwards. It should be today <= end.
Fixed.
But the tests from trunk are failing for me in the same way here.
dobey (dobey) wrote:
The tests are failing because this branch causes an extra notification to pop, so all the expectations are wrong in the tests.
I've proposed https:/
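Why the expectations had to change: the farewell message fires once at aggregator startup, so every notification count the tests assert moves up by one. A toy model of the situation (`FakeNotifier` is hypothetical, standing in for the test fixture's notification object):

```python
class FakeNotifier:
    """Hypothetical stand-in for the test fixture's notification object."""

    def __init__(self):
        self.notifications_shown = []

    def send_notification(self, title, body):
        self.notifications_shown.append((title, body))

notifier = FakeNotifier()
# The new farewell notification fires at startup...
notifier.send_notification("Ubuntu One", "shutting down on June 1st, 2014")
# ...so an event that used to produce the 1st notification now produces the 2nd.
notifier.send_notification("Ubuntu One", "New cloud folder(s) available")
print(len(notifier.notifications_shown))  # 2 where the old tests expected 1
```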
1411. By Diego Sarmentero: merging lp:~dobey/ubuntuone-client/farewell-u1 branch to fix tests
Diego Sarmentero (diegosarmentero) wrote:
> The tests are failing because this branch causes an extra notification to pop,
> so all the expectations are wrong in the tests.
>
> I've proposed https:/
> client/
> in, the tests should pass again. It works for me in the package.
Thanks, done!
dobey (dobey):
Preview Diff
=== modified file 'tests/status/test_aggregator.py'
--- tests/status/test_aggregator.py	2012-11-15 13:23:43 +0000
+++ tests/status/test_aggregator.py	2014-04-11 14:08:45 +0000
@@ -943,7 +943,7 @@
         self.listener.handle_AQ_CHANGE_PUBLIC_ACCESS_OK(share_id, node_id,
                                                         is_public, public_url)
         self.assertEqual(
-            1, len(self.status_frontend.notification.notifications_shown))
+            2, len(self.status_frontend.notification.notifications_shown))
 
     def test_file_unpublished(self):
         """A file unpublished event is processed."""
@@ -955,7 +955,7 @@
         self.listener.handle_AQ_CHANGE_PUBLIC_ACCESS_OK(share_id, node_id,
                                                         is_public, public_url)
         self.assertEqual(
-            1, len(self.status_frontend.notification.notifications_shown))
+            2, len(self.status_frontend.notification.notifications_shown))
 
     def test_download_started(self):
         """A download was added to the queue."""
@@ -1053,7 +1053,7 @@
         self.fakevm.volumes[SHARE_ID] = share
         self.listener.handle_VM_SHARE_CREATED(SHARE_ID)
         self.assertEqual(
-            1, len(self.status_frontend.notification.notifications_shown))
+            2, len(self.status_frontend.notification.notifications_shown))
 
     def test_already_subscribed_new_udf_available(self):
         """A new udf that was already subscribed."""
@@ -1061,14 +1061,14 @@
         udf.subscribed = True
         self.listener.handle_VM_UDF_CREATED(udf)
         self.assertEqual(
-            0, len(self.status_frontend.notification.notifications_shown))
+            1, len(self.status_frontend.notification.notifications_shown))
 
     def test_new_udf_available(self):
         """A new udf is available for subscription."""
         udf = UDF()
         self.listener.handle_VM_UDF_CREATED(udf)
         self.assertEqual(
-            1, len(self.status_frontend.notification.notifications_shown))
+            2, len(self.status_frontend.notification.notifications_shown))
 
     def test_two_new_udfs_available(self):
         """A new udf is available for subscription."""
@@ -1077,14 +1077,14 @@
         udf2 = UDF()
         self.listener.handle_VM_UDF_CREATED(udf2)
         self.assertEqual(
-            2, len(self.status_frontend.notification.notifications_shown))
+            3, len(self.status_frontend.notification.notifications_shown))
 
     def test_server_connection_lost(self):
         """The client connected to the server."""
         self.status_frontend.aggregator.connected = True
         self.listener.handle_SYS_CONNECTION_LOST()
         self.assertEqual(
-            0, len(self.status_frontend.notification.notifications_shown))
+            1, len(self.status_frontend.notification.notifications_shown))
         self.assertFalse(self.status_frontend.aggregator.connected)
 
     def test_server_connection_made(self):
@@ -1092,7 +1092,7 @@
         self.status_frontend.aggregator.connected = False
         self.listener.handle_SYS_CONNECTION_MADE()
         self.assertEqual(
-            0, len(self.status_frontend.notification.notifications_shown))
+            1, len(self.status_frontend.notification.notifications_shown))
         self.assertTrue(self.status_frontend.aggregator.connected)
 
     def test_set_show_all_notifications(self):
@@ -1117,7 +1117,7 @@
         self.listener.handle_SYS_QUOTA_EXCEEDED(
             volume_id=UDF_ID, free_bytes=0)
         self.assertEqual(
-            0, len(self.status_frontend.notification.notifications_shown))
+            1, len(self.status_frontend.notification.notifications_shown))
         mocker.restore()
         mocker.verify()
 
@@ -1138,7 +1138,7 @@
         self.listener.handle_SYS_QUOTA_EXCEEDED(
             volume_id=ROOT_ID, free_bytes=0)
         self.assertEqual(
-            0, len(self.status_frontend.notification.notifications_shown))
+            1, len(self.status_frontend.notification.notifications_shown))
         mocker.restore()
         mocker.verify()
 
@@ -1162,15 +1162,15 @@
         self.fakevm.volumes[SHARE_ID] = share
         self.listener.handle_SYS_QUOTA_EXCEEDED(SHARE_ID, BYTES)
         self.assertEqual(
-            1, len(self.status_frontend.notification.notifications_shown))
+            2, len(self.status_frontend.notification.notifications_shown))
         self.listener.handle_SYS_QUOTA_EXCEEDED(SHARE_ID, BYTES)
         self.listener.handle_SYS_QUOTA_EXCEEDED(SHARE_ID, BYTES)
         self.assertEqual(
-            1, len(self.status_frontend.notification.notifications_shown))
+            2, len(self.status_frontend.notification.notifications_shown))
         self.status_frontend.aggregator.clock.advance(aggregator.ONE_DAY + 1)
         self.listener.handle_SYS_QUOTA_EXCEEDED(SHARE_ID, BYTES)
         self.assertEqual(
-            2, len(self.status_frontend.notification.notifications_shown))
+            3, len(self.status_frontend.notification.notifications_shown))
         mocker.restore()
         mocker.verify()
 
 
=== modified file 'ubuntuone/status/aggregator.py'
--- ubuntuone/status/aggregator.py	2013-03-20 21:33:53 +0000
+++ ubuntuone/status/aggregator.py	2014-04-11 14:08:45 +0000
@@ -51,6 +51,8 @@
 Q_ = lambda string: gettext.dgettext(GETTEXT_PACKAGE, string)
 
 UBUNTUONE_TITLE = Q_("Ubuntu One")
+UBUNTUONE_END = Q_("Ubuntu One file services will be "
+    "shutting down on June 1st, 2014.\nThanks for your support.")
 NEW_UDFS_SENDER = Q_("New cloud folder(s) available")
 FINAL_COMPLETED = Q_("File synchronization completed.")
 
@@ -827,6 +829,12 @@
         self.syncdaemon_service = service
         self.sync_menu = None
         self.start_sync_menu()
+        self.farewell_ubuntuone_sync()
+
+    def farewell_ubuntuone_sync(self):
+        """Show notification about the upcoming end of UbuntuOne sync."""
+        self.notification.send_notification(
+            UBUNTUONE_TITLE, UBUNTUONE_END)
 
     def start_sync_menu(self):
         """Create the sync menu and register the progress listener."""
 
=== modified file 'ubuntuone/syncdaemon/main.py'
--- ubuntuone/syncdaemon/main.py	2013-02-04 16:04:19 +0000
+++ ubuntuone/syncdaemon/main.py	2014-04-11 14:08:45 +0000
@@ -31,6 +31,7 @@
 import logging
 import os
 import sys
+import datetime
 
 from dirspec.utils import user_home
 from twisted.internet import defer, reactor, task
@@ -106,51 +107,58 @@
         if not throttling_enabled:
             throttling_enabled = user_config.get_throttling()
 
-        self.logger.info("Starting Ubuntu One client version %s",
-                         clientdefs.VERSION)
-        self.logger.info("Using %r as root dir", self.root_dir)
-        self.logger.info("Using %r as data dir", self.data_dir)
-        self.logger.info("Using %r as shares root dir", self.shares_dir)
-        self.db = tritcask.Tritcask(tritcask_dir)
-        self.vm = volume_manager.VolumeManager(self)
-        self.fs = filesystem_manager.FileSystemManager(
-            data_dir, partials_dir, self.vm, self.db)
-        self.event_q = event_queue.EventQueue(self.fs, ignore_files,
-                                              monitor_class=monitor_class)
-        self.fs.register_eq(self.event_q)
-
-        # subscribe VM to EQ, to be unsubscribed in shutdown
-        self.event_q.subscribe(self.vm)
-        self.vm.init_root()
-
-        # we don't have the oauth tokens yet, we 'll get them later
-        self.action_q = action_queue.ActionQueue(self.event_q, self,
-                                                 host, port,
-                                                 dns_srv, ssl,
-                                                 disable_ssl_verify,
-                                                 read_limit, write_limit,
-                                                 throttling_enabled)
-        self.hash_q = hash_queue.HashQueue(self.event_q)
-        events_nanny.DownloadFinishedNanny(self.fs, self.event_q, self.hash_q)
-
-        # call StateManager after having AQ
-        self.state_manager = StateManager(self, handshake_timeout)
-
-        self.sync = sync.Sync(self)
-        self.lr = local_rescan.LocalRescan(self.vm, self.fs,
-                                           self.event_q, self.action_q)
-
-        self.external = SyncdaemonService(main=self,
-                                          send_events=broadcast_events)
-        self.external.oauth_credentials = oauth_credentials
-        if user_config.get_autoconnect():
-            self.external.connect(autoconnecting=True)
-
-        self.status_listener = None
-        self.start_status_listener()
-
-        self.mark = task.LoopingCall(self.log_mark)
-        self.mark.start(mark_interval)
+        end_date = datetime.date(2014, 6, 1)
+        if datetime.date.today() < end_date:
+            self.logger.info("Starting Ubuntu One client version %s",
+                             clientdefs.VERSION)
+            self.logger.info("Using %r as root dir", self.root_dir)
+            self.logger.info("Using %r as data dir", self.data_dir)
+            self.logger.info("Using %r as shares root dir", self.shares_dir)
+            self.db = tritcask.Tritcask(tritcask_dir)
+            self.vm = volume_manager.VolumeManager(self)
+            self.fs = filesystem_manager.FileSystemManager(
+                data_dir, partials_dir, self.vm, self.db)
+            self.event_q = event_queue.EventQueue(
+                self.fs, ignore_files, monitor_class=monitor_class)
+            self.fs.register_eq(self.event_q)
+
+            # subscribe VM to EQ, to be unsubscribed in shutdown
+            self.event_q.subscribe(self.vm)
+            self.vm.init_root()
+
+            # we don't have the oauth tokens yet, we 'll get them later
+            self.action_q = action_queue.ActionQueue(self.event_q, self,
+                                                     host, port,
+                                                     dns_srv, ssl,
+                                                     disable_ssl_verify,
+                                                     read_limit, write_limit,
+                                                     throttling_enabled)
+            self.hash_q = hash_queue.HashQueue(self.event_q)
+            events_nanny.DownloadFinishedNanny(self.fs, self.event_q,
+                                               self.hash_q)
+
+            # call StateManager after having AQ
+            self.state_manager = StateManager(self, handshake_timeout)
+
+            self.sync = sync.Sync(self)
+            self.lr = local_rescan.LocalRescan(self.vm, self.fs,
+                                               self.event_q, self.action_q)
+
+            self.external = SyncdaemonService(main=self,
+                                              send_events=broadcast_events)
+            self.external.oauth_credentials = oauth_credentials
+            if user_config.get_autoconnect():
+                self.external.connect(autoconnecting=True)
+
+            self.status_listener = None
+            self.start_status_listener()
+
+            self.mark = task.LoopingCall(self.log_mark)
+            self.mark.start(mark_interval)
+        else:
+            if reactor.running:
+                reactor.stop()
+            sys.exit(0)
 
     def start_status_listener(self):
         """Start the status listener if it is configured to start."""
The attempt to merge lp:~diegosarmentero/ubuntuone-client/farewell-u1 into lp:ubuntuone-client failed; the test suite reported failures during the package build.