Merge lp:~ewanmellor/nova/xenapi-concurrency-model into lp:~hudson-openstack/nova/trunk
- xenapi-concurrency-model
- Merge into trunk
Status: Superseded
Proposed branch: lp:~ewanmellor/nova/xenapi-concurrency-model
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 326 lines (+163/-37), 1 file modified (nova/virt/xenapi.py)
To merge this branch: bzr merge lp:~ewanmellor/nova/xenapi-concurrency-model
Related bugs: (none)

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Jay Pipes (community) | | | Approve

Review via email: mp+32722@code.launchpad.net
This proposal has been superseded by a proposal from 2010-08-17.
Commit message
Description of the change
Rework virt.xenapi's concurrency model. There were many places where we were
inadvertently blocking the reactor thread. The reworking puts all calls to
XenAPI on background threads, so that they won't block the reactor thread.
Long-lived operations (VM start, reboot, etc) are invoked asynchronously
at the XenAPI level (Async.VM.start, etc). These return a XenAPI task. We
relinquish the background thread at this point, so as not to hold threads in
the pool for too long, and use reactor.callLater to poll the task.
This combination of techniques means that we don't block the reactor thread at
all, and at the same time we don't hold lots of threads waiting for
long-running operations.
There is a FIXME in here: get_info does not conform to these new rules.
Changes are required in compute.service before we can make get_info
non-blocking.
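The poll-instead-of-block pattern described above can be sketched with just the standard library. This is an illustrative toy, not the branch's code: `FakeTask`, `run_reactor`, and the callback queue stand in for a XenAPI task and for Twisted's reactor.

```python
import queue

class FakeTask:
    """Toy stand-in for a XenAPI task: reports 'pending' a few times
    before completing."""
    def __init__(self, polls_until_done):
        self.polls_left = polls_until_done

    def get_status(self):
        if self.polls_left > 0:
            self.polls_left -= 1
            return 'pending'
        return 'success'

    def get_result(self):
        return 'vm-ref-1'

def run_reactor(calls):
    """Toy reactor: run queued callbacks one at a time.  Like Twisted's
    reactor thread, it must never be blocked by a long wait."""
    while not calls.empty():
        calls.get()()

def poll_task(calls, task, on_done):
    """Poll once; if the task is still pending, re-queue the poll rather
    than sleeping.  No thread is held while the remote operation runs,
    which is the same idea as reactor.callLater(interval, _poll_task, ...)."""
    if task.get_status() == 'pending':
        calls.put(lambda: poll_task(calls, task, on_done))
    else:
        on_done(task.get_result())

calls = queue.Queue()
results = []
calls.put(lambda: poll_task(calls, FakeTask(3), results.append))
run_reactor(calls)
print(results)  # ['vm-ref-1']
```

The key property is that a pending task costs only a queued callback, not a parked thread, so many long-running operations can be in flight at once.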
Ewan Mellor (ewanmellor) wrote:
I thought that those two were a bit loud even for debug level -- that's
two messages every .5 seconds when polling a task (in the default
configuration).
Ewan.
On Mon, Aug 16, 2010 at 06:35:56PM +0100, Jay Pipes wrote:
> Review: Approve
> Really nice work, Ewan! No criticism at all from me! Feel free to uncomment the logging.debug() output, though. :)
> --
> https:/
> You are the owner of lp:~ewanmellor/nova/xenapi-concurrency-model.
Jay Pipes (jaypipes) wrote:
> I thought that those two were a bit loud even for debug level -- that's
> two messages every .5 seconds when polling a task (in the default
> configuration).
Hmm, I suppose that is a bit loud...but then again it's debugging information. Well, I approve regardless. I'd prefer to see the debug statements uncommented, but it's certainly no reason to hold up this excellent patch :)
-jay
OpenStack Infra (hudson-openstack) wrote:
Attempt to merge lp:~ewanmellor/nova/xenapi-concurrency-model into lp:nova failed due to merge conflicts:
text conflict in nova/virt/xenapi.py
230. By Ewan Mellor
Merge with trunk, in particular merging with the style cleanup that caused conflicts with this branch.
Ewan Mellor (ewanmellor) wrote:
I've remerged this with trunk. The style cleanups that went in today caused
inevitable conflicts.
Ewan.
On Tue, Aug 17, 2010 at 10:33:45PM +0100, OpenStack Hudson wrote:
> Attempt to merge lp:~ewanmellor/nova/xenapi-concurrency-model into lp:nova failed due to merge conflicts:
>
> text conflict in nova/virt/xenapi.py
> --
> https:/
> You are the owner of lp:~ewanmellor/nova/xenapi-concurrency-model.
231. By Ewan Mellor
Remove whitespace to match style guide.

232. By Ewan Mellor
Move deferredToThread into utils, as suggested by termie.
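The `deferredToThread` decorator mentioned above wraps Twisted's `deferToThread` so a blocking call returns a Deferred. A standard-library analogue of the same idea (illustrative only: `to_thread` and `slow_lookup` are hypothetical names, and `concurrent.futures` takes the place of Twisted's thread pool):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_pool = ThreadPoolExecutor(max_workers=4)

def to_thread(f):
    """Run the decorated function on a pool thread and return a Future,
    much as deferredToThread returns a Deferred."""
    @wraps(f)
    def g(*args, **kwargs):
        return _pool.submit(f, *args, **kwargs)
    return g

@to_thread
def slow_lookup(name):
    # Stands in for a blocking remote XenAPI call.
    return 'vm-ref-for-' + name

future = slow_lookup('instance-1')
print(future.result())  # vm-ref-for-instance-1
```

The caller gets a handle immediately; only the pool thread waits on the remote call, keeping the event loop responsive.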
Unmerged revisions
Preview Diff
```diff
=== modified file 'nova/virt/xenapi.py'
--- nova/virt/xenapi.py	2010-08-17 11:53:30 +0000
+++ nova/virt/xenapi.py	2010-08-17 21:57:41 +0000
@@ -16,15 +16,33 @@
 
 """
 A connection to XenServer or Xen Cloud Platform.
+
+The concurrency model for this class is as follows:
+
+All XenAPI calls are on a thread (using t.i.t.deferToThread, or the decorator
+deferredToThread). They are remote calls, and so may hang for the usual
+reasons. They should not be allowed to block the reactor thread.
+
+All long-running XenAPI calls (VM.start, VM.reboot, etc) are called async
+(using XenAPI.VM.async_start etc). These return a task, which can then be
+polled for completion. Polling is handled using reactor.callLater.
+
+This combination of techniques means that we don't block the reactor thread at
+all, and at the same time we don't hold lots of threads waiting for
+long-running operations.
+
+FIXME: get_info currently doesn't conform to these rules, and will block the
+reactor thread if the VM.get_by_name_label or VM.get_record calls block.
 """
 
 import logging
 import xmlrpclib
 
 from twisted.internet import defer
+from twisted.internet import reactor
 from twisted.internet import task
+from twisted.internet.threads import deferToThread
 
-from nova import exception
 from nova import flags
 from nova import process
 from nova.auth.manager import AuthManager
@@ -47,6 +65,11 @@
                     None,
                     'Password for connection to XenServer/Xen Cloud Platform.'
                     ' Used only if connection_type=xenapi.')
+flags.DEFINE_float('xenapi_task_poll_interval',
+                   0.5,
+                   'The interval used for polling of remote tasks '
+                   '(Async.VM.start, etc). Used only if '
+                   'connection_type=xenapi.')
 
 
 XENAPI_POWER_STATE = {
@@ -74,6 +97,12 @@
     return XenAPIConnection(url, username, password)
 
 
+def deferredToThread(f):
+    def g(*args, **kwargs):
+        return deferToThread(f, *args, **kwargs)
+    return g
+
+
 class XenAPIConnection(object):
     def __init__(self, url, user, pw):
         self._conn = XenAPI.Session(url)
@@ -84,9 +113,8 @@
                 for vm in self._conn.xenapi.VM.get_all()]
 
     @defer.inlineCallbacks
-    @exception.wrap_exception
     def spawn(self, instance):
-        vm = yield self.lookup(instance.name)
+        vm = yield self._lookup(instance.name)
         if vm is not None:
             raise Exception('Attempted to create non-unique name %s' %
                             instance.name)
@@ -105,21 +133,28 @@
 
         user = AuthManager().get_user(instance.datamodel['user_id'])
         project = AuthManager().get_project(instance.datamodel['project_id'])
-        vdi_uuid = yield self.fetch_image(
+        vdi_uuid = yield self._fetch_image(
            instance.datamodel['image_id'], user, project, True)
-        kernel = yield self.fetch_image(
+        kernel = yield self._fetch_image(
            instance.datamodel['kernel_id'], user, project, False)
-        ramdisk = yield self.fetch_image(
+        ramdisk = yield self._fetch_image(
            instance.datamodel['ramdisk_id'], user, project, False)
-        vdi_ref = yield self._conn.xenapi.VDI.get_by_uuid(vdi_uuid)
+        vdi_ref = yield self._call_xenapi('VDI.get_by_uuid', vdi_uuid)
 
-        vm_ref = yield self.create_vm(instance, kernel, ramdisk)
-        yield self.create_vbd(vm_ref, vdi_ref, 0, True)
+        vm_ref = yield self._create_vm(instance, kernel, ramdisk)
+        yield self._create_vbd(vm_ref, vdi_ref, 0, True)
         if network_ref:
             yield self._create_vif(vm_ref, network_ref, mac_address)
-        yield self._conn.xenapi.VM.start(vm_ref, False, False)
+        logging.debug('Starting VM %s...', vm_ref)
+        yield self._call_xenapi('VM.start', vm_ref, False, False)
+        logging.info('Spawning VM %s created %s.', instance.name, vm_ref)
 
-    def create_vm(self, instance, kernel, ramdisk):
+
+    @defer.inlineCallbacks
+    def _create_vm(self, instance, kernel, ramdisk):
+        """Create a VM record. Returns a Deferred that gives the new
+        VM reference."""
+
         mem = str(long(instance.datamodel['memory_kb']) * 1024)
         vcpus = str(instance.datamodel['vcpus'])
         rec = {
@@ -152,11 +187,16 @@
             'other_config': {},
             }
         logging.debug('Created VM %s...', instance.name)
-        vm_ref = self._conn.xenapi.VM.create(rec)
+        vm_ref = yield self._call_xenapi('VM.create', rec)
         logging.debug('Created VM %s as %s.', instance.name, vm_ref)
-        return vm_ref
+        defer.returnValue(vm_ref)
 
-    def create_vbd(self, vm_ref, vdi_ref, userdevice, bootable):
+
+    @defer.inlineCallbacks
+    def _create_vbd(self, vm_ref, vdi_ref, userdevice, bootable):
+        """Create a VBD record. Returns a Deferred that gives the new
+        VBD reference."""
+
         vbd_rec = {}
         vbd_rec['VM'] = vm_ref
         vbd_rec['VDI'] = vdi_ref
@@ -171,12 +211,17 @@
         vbd_rec['qos_algorithm_params'] = {}
         vbd_rec['qos_supported_algorithms'] = []
         logging.debug('Creating VBD for VM %s, VDI %s ... ', vm_ref, vdi_ref)
-        vbd_ref = self._conn.xenapi.VBD.create(vbd_rec)
+        vbd_ref = yield self._call_xenapi('VBD.create', vbd_rec)
         logging.debug('Created VBD %s for VM %s, VDI %s.', vbd_ref, vm_ref,
                       vdi_ref)
-        return vbd_ref
+        defer.returnValue(vbd_ref)
 
+
+    @defer.inlineCallbacks
     def _create_vif(self, vm_ref, network_ref, mac_address):
+        """Create a VIF record. Returns a Deferred that gives the new
+        VIF reference."""
+
         vif_rec = {}
         vif_rec['device'] = '0'
         vif_rec['network']= network_ref
@@ -188,25 +233,31 @@
         vif_rec['qos_algorithm_params'] = {}
         logging.debug('Creating VIF for VM %s, network %s ... ', vm_ref,
                       network_ref)
-        vif_ref = self._conn.xenapi.VIF.create(vif_rec)
+        vif_ref = yield self._call_xenapi('VIF.create', vif_rec)
         logging.debug('Created VIF %s for VM %s, network %s.', vif_ref,
                       vm_ref, network_ref)
-        return vif_ref
+        defer.returnValue(vif_ref)
 
+
+    @defer.inlineCallbacks
     def _find_network_with_bridge(self, bridge):
         expr = 'field "bridge" = "%s"' % bridge
-        networks = self._conn.xenapi.network.get_all_records_where(expr)
+        networks = yield self._call_xenapi('network.get_all_records_where',
+                                           expr)
         if len(networks) == 1:
-            return networks.keys()[0]
+            defer.returnValue(networks.keys()[0])
         elif len(networks) > 1:
             raise Exception('Found non-unique network for bridge %s' % bridge)
         else:
             raise Exception('Found no network for bridge %s' % bridge)
 
-    def fetch_image(self, image, user, project, use_sr):
+
+    @defer.inlineCallbacks
+    def _fetch_image(self, image, user, project, use_sr):
         """use_sr: True to put the image as a VDI in an SR, False to place
         it on dom0's filesystem. The former is for VM disks, the latter for
-        its kernel and ramdisk (if external kernels are being used)."""
+        its kernel and ramdisk (if external kernels are being used).
+        Returns a Deferred that gives the new VDI UUID."""
 
         url = images.image_url(image)
         access = AuthManager().get_access_key(user, project)
@@ -218,22 +269,31 @@
         args['password'] = user.secret
         if use_sr:
             args['add_partition'] = 'true'
-        return self._call_plugin('objectstore', fn, args)
+        task = yield self._async_call_plugin('objectstore', fn, args)
+        uuid = yield self._wait_for_task(task)
+        defer.returnValue(uuid)
 
+
+    @defer.inlineCallbacks
     def reboot(self, instance):
-        vm = self.lookup(instance.name)
+        vm = yield self._lookup(instance.name)
         if vm is None:
             raise Exception('instance not present %s' % instance.name)
-        yield self._conn.xenapi.VM.clean_reboot(vm)
+        task = yield self._call_xenapi('Async.VM.clean_reboot', vm)
+        yield self._wait_for_task(task)
 
+
+    @defer.inlineCallbacks
     def destroy(self, instance):
-        vm = self.lookup(instance.name)
+        vm = yield self._lookup(instance.name)
         if vm is None:
             raise Exception('instance not present %s' % instance.name)
-        yield self._conn.xenapi.VM.destroy(vm)
+        task = yield self._call_xenapi('Async.VM.destroy', vm)
+        yield self._wait_for_task(task)
+
 
     def get_info(self, instance_id):
-        vm = self.lookup(instance_id)
+        vm = self._lookup_blocking(instance_id)
         if vm is None:
             raise Exception('instance not present %s' % instance_id)
         rec = self._conn.xenapi.VM.get_record(vm)
@@ -243,7 +303,13 @@
                 'num_cpu': rec['VCPUs_max'],
                 'cpu_time': 0}
 
-    def lookup(self, i):
+
+    @deferredToThread
+    def _lookup(self, i):
+        return self._lookup_blocking(i)
+
+
+    def _lookup_blocking(self, i):
         vms = self._conn.xenapi.VM.get_by_name_label(i)
         n = len(vms)
         if n == 0:
@@ -253,11 +319,59 @@
         else:
             return vms[0]
 
-    def _call_plugin(self, plugin, fn, args):
+
+    def _wait_for_task(self, task):
+        """Return a Deferred that will give the result of the given task.
+        The task is polled until it completes."""
+        d = defer.Deferred()
+        reactor.callLater(0, self._poll_task, task, d)
+        return d
+
+
+    @deferredToThread
+    def _poll_task(self, task, deferred):
+        """Poll the given XenAPI task, and fire the given Deferred if we
+        get a result."""
+        try:
+            #logging.debug('Polling task %s...', task)
+            status = self._conn.xenapi.task.get_status(task)
+            if status == 'pending':
+                reactor.callLater(FLAGS.xenapi_task_poll_interval,
+                                  self._poll_task, task, deferred)
+            elif status == 'success':
+                result = self._conn.xenapi.task.get_result(task)
+                logging.info('Task %s status: success. %s', task, result)
+                deferred.callback(_parse_xmlrpc_value(result))
+            else:
+                error_info = self._conn.xenapi.task.get_error_info(task)
+                logging.warn('Task %s status: %s. %s', task, status,
+                             error_info)
+                deferred.errback(XenAPI.Failure(error_info))
+            #logging.debug('Polling task %s done.', task)
+        except Exception, exn:
+            logging.warn(exn)
+            deferred.errback(exn)
+
+
+    @deferredToThread
+    def _call_xenapi(self, method, *args):
+        """Call the specified XenAPI method on a background thread. Returns
+        a Deferred for the result."""
+        f = self._conn.xenapi
+        for m in method.split('.'):
+            f = f.__getattr__(m)
+        return f(*args)
+
+
+    @deferredToThread
+    def _async_call_plugin(self, plugin, fn, args):
+        """Call Async.host.call_plugin on a background thread. Returns a
+        Deferred with the task reference."""
         return _unwrap_plugin_exceptions(
-            self._conn.xenapi.host.call_plugin,
+            self._conn.xenapi.Async.host.call_plugin,
             self._get_xenapi_host(), plugin, fn, args)
 
+
     def _get_xenapi_host(self):
         return self._conn.xenapi.session.get_this_host(self._conn.handle)
 
@@ -281,3 +395,15 @@
     except xmlrpclib.ProtocolError, exn:
         logging.debug("Got exception: %s", exn)
         raise
+
+
+def _parse_xmlrpc_value(val):
+    """Parse the given value as if it were an XML-RPC value. This is
+    sometimes used as the format for the task.result field."""
+    if not val:
+        return val
+    x = xmlrpclib.loads(
+        '<?xml version="1.0"?><methodResponse><params><param>' +
+        val +
+        '</param></params></methodResponse>')
+    return x[0][0]
```
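The `_parse_xmlrpc_value` helper at the end of the diff can be exercised directly. Here is the same logic in Python 3 spelling (the branch targets Python 2, where the module is `xmlrpclib`; the sample `OpaqueRef` value is illustrative):

```python
import xmlrpc.client

def parse_xmlrpc_value(val):
    """Parse a string the way XenAPI encodes task.result: a single
    XML-RPC <param> value wrapped in a synthetic methodResponse."""
    if not val:
        return val
    params, _methodname = xmlrpc.client.loads(
        '<?xml version="1.0"?><methodResponse><params><param>' +
        val +
        '</param></params></methodResponse>')
    return params[0]

print(parse_xmlrpc_value('<value>OpaqueRef:abc</value>'))  # OpaqueRef:abc
print(repr(parse_xmlrpc_value('')))  # '' (falsy input passes through unchanged)
```

Wrapping the fragment in a fake `methodResponse` lets the stock XML-RPC unmarshaller do the type decoding (strings, booleans, structs) without hand-written parsing.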