Merge lp:~rconradharris/nova/server_progress into lp:~hudson-openstack/nova/trunk

Proposed by Rick Harris
Status: Merged
Approved by: Matt Dietz
Approved revision: 1575
Merged at revision: 1613
Proposed branch: lp:~rconradharris/nova/server_progress
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1033 lines (+322/-110)
22 files modified
nova/api/openstack/views/servers.py (+5/-5)
nova/compute/api.py (+6/-3)
nova/compute/manager.py (+9/-6)
nova/db/sqlalchemy/migrate_repo/versions/049_add_instances_progress.py (+43/-0)
nova/db/sqlalchemy/models.py (+2/-0)
nova/flags.py (+4/-0)
nova/tests/api/openstack/contrib/test_createserverext.py (+4/-2)
nova/tests/api/openstack/contrib/test_volumes.py (+2/-1)
nova/tests/api/openstack/test_server_actions.py (+1/-0)
nova/tests/api/openstack/test_servers.py (+10/-5)
nova/tests/test_compute.py (+1/-1)
nova/tests/test_virt_drivers.py (+2/-1)
nova/tests/test_xenapi.py (+45/-31)
nova/tests/xenapi/stubs.py (+5/-2)
nova/utils.py (+14/-0)
nova/virt/driver.py (+10/-6)
nova/virt/fake.py (+3/-3)
nova/virt/xenapi/vm_utils.py (+4/-0)
nova/virt/xenapi/vmops.py (+126/-30)
nova/virt/xenapi_conn.py (+14/-9)
plugins/xenserver/xenapi/etc/xapi.d/plugins/glance (+5/-1)
plugins/xenserver/xenapi/etc/xapi.d/plugins/migration (+7/-4)
To merge this branch: bzr merge lp:~rconradharris/nova/server_progress
Reviewer Review Type Date Requested Status
Trey Morris (community) Approve
Matt Dietz (community) Approve
Review via email: mp+75805@code.launchpad.net

Description of the change

This patch adds instance progress, which is used by the OpenStack API to indicate how far along the currently executing action is (BUILD/REBUILD, MIGRATION/RESIZE).

For the first cut, we decided to keep it simple and compute progress by counting discrete steps. This is not ideal, since some steps, in particular those which involve transferring large amounts of data over the network, take *much* longer than others. A better approximation would account for the amount of data transferred to the destination host, since in most cases this dominates the time spent.
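As a rough sketch of the step-counting approach (the constants and the percentage formula below mirror the _update_instance_progress helper added to nova/virt/xenapi/vmops.py in this patch; the progress_for_step wrapper itself is illustrative only):

    # Step counts defined by this patch in nova/virt/xenapi/vmops.py.
    RESIZE_TOTAL_STEPS = 5
    BUILD_TOTAL_STEPS = 4


    def progress_for_step(step, total_steps):
        """Map a completed step number to a 0-100 progress percentage,
        using the same arithmetic _update_instance_progress applies before
        writing the instances.progress column."""
        return round(float(step) / total_steps * 100)


    # A build bumps progress in four even notches. Note that a slow step
    # (e.g. streaming a multi-gig image) counts the same as a fast one,
    # which is exactly the inaccuracy described above.
    print([progress_for_step(s, BUILD_TOTAL_STEPS) for s in range(1, 5)])
    # [25.0, 50.0, 75.0, 100.0]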

In addition to adding progress, this patch:

- Allows resizes to use the same host for source and destination, which is useful for dev environments without a second host. This is enabled by the new --allow_resize_to_same_host flag (a short sketch of the check follows this list).

- Fixes a bug in the glance and migration XenAPI plugins where the VHDs were being copied into the SR in the wrong order. Previously, the base-copy was copied first, meaning it was possible for snapwatchd to see the base-copy before the dependent COW was present. It would then treat the base-copy as an unreferenced parent and GC it.

- Additional refactoring and cleanups.
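As mentioned in the first bullet, the same-host gate added to nova/compute/manager.py reduces to one extra flag check (visible in the diff below). A minimal standalone distillation, assuming a hypothetical resize_allowed helper (the real code flips the instance to an ERROR state rather than returning a boolean):

    def resize_allowed(instance_host, current_host, allow_resize_to_same_host):
        """Return True if a resize to this destination may proceed."""
        same_host = instance_host == current_host
        # Matching source/destination is only OK when the operator opted in
        # via --allow_resize_to_same_host (handy for one-box dev setups).
        return not same_host or allow_resize_to_same_host


    assert resize_allowed('compute1', 'compute2', False)      # normal resize
    assert not resize_allowed('compute1', 'compute1', False)  # rejected
    assert resize_allowed('compute1', 'compute1', True)       # dev environment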

Revision history for this message
Matt Dietz (cerberus) wrote :

So this looks really good from an implementation perspective. Beyond that, I really don't like that this kind of functionality is part of the API spec, which is of course not your fault. The following is a diatribe against the idea, not the patch.

Though the progress is done in a way that mostly keeps all the moving parts decoupled (since it's just a guess based on the number of expected steps rather than disk progress, as you mention), I'm really not a fan of the false sense of security it will provide end users. Questions like "Why is my Linux instance at 25% for a quarter of the time a Windows instance is?" will pop up shortly. Moving to a model of disk progress has all sorts of other scary implications, like trying to track the copy progress through the XS plugin that is currently running (yuck) and I don't even know how you'd do it for the other hypervisors. It also opens up all kinds of synchronization issues when you start including zones and trying to bubble that data up to the API in a reasonably timely manner.

Long story short, I think including progress counters is a very strong statement regarding the type of cloud you expect it to be, and in this case, I don't think synchronous features like progress are indicative of the sort of scale we expect Nova to achieve.

If the community decides they want this, you can consider the above an "approve." I don't want to block it just because I might have a different idea of the Way Things Should Be™

Revision history for this message
Jay Pipes (jaypipes) wrote :

> So this looks really good from an implementation perspective. Beyond that, I
> really don't like that this kind of functionality is part of the API spec,
> which is of course not your fault. The following is a diatribe against the
> idea, not the patch.
>
> Though the progress is done in a way that mostly keeps all the moving parts
> decoupled (since it's just a guess based on the number of expected steps
> rather than disk progress, as you mention) I'm really not a fan of the false
> sense of security it will provide end users. Questions like "Why is my Linux
> instance at 25% for a quarter of the time a Windows instance is?" will pop up
> shortly. Moving to a model of disk progress has all sorts of other scary
> implications, like trying to track the copy progress through the XS plugin
> that is currently running (yuck) and I don't even know how you'd do it for the
> other hypervisors. It also opens up all kinds of synchronization issues when
> you start including zones and trying to bubble that data up to the API in a
> reasonably timely manner.
>
> Long story short, I think including progress counters is a very strong
> statement regarding the type of cloud you expect it to be, and in this case, I
> don't think synchronous features like progress are indicative of the sort of
> scale we expect Nova to achieve.
>
> If the community decides they want this, you can consider the above an
> "approve." I don't want to block it just because I might have a different idea
> of the Way Things Should Be™

+10

Revision history for this message
Rick Harris (rconradharris) wrote :

> Long story short, I think including progress counters is a very strong
> statement regarding the type of cloud you expect it to be, and in this case, I
> don't think synchronous features like progress are indicative of the sort of
> scale we expect Nova to achieve.

Nailed it.

Feels weird to argue against my own patch, but you're absolutely right.

If we get a consensus that this patch should be rejected, I'll go ahead and extract out the misc fixes I included in this patch so we don't lose that work.

Revision history for this message
Naveed Massjouni (ironcamel) wrote :

I do not see where you are testing the new server progress feature you have added. For example, I see no new assertions in the tests. Is it because this is difficult to test and should be tested at a higher level?

Also, it looks like you have only added the progress feature for xen. Is this correct?

Revision history for this message
Rick Harris (rconradharris) wrote :

> I do not see where you are testing the new server progress feature you have added. For example, I
> see no new assertions in the tests. Is it because this is difficult to test and should be tested at
> a higher level?

Automated functional testing for this didn't feel like it would be worth the additional effort. This was a judgement call, and feel free to disagree here, but it seemed like any tests for this would be slow, difficult to write, and not very useful.

The reason I think it would not be very useful is that, unlike some areas of the code where we have complex interactions, this is very straightforward. That's not to say this code can't break, it's just that the chances of that happening inadvertently are probably small relative to the pain of testing this.

> Also, it looks like you have only added the progress feature for xen. Is this correct?

Yeah, I'm adding it to XenServer for now since that's the hypervisor I'm most familiar with. At a high level the steps should be similar, but each virt driver may have a slightly different process for builds and migrations, and I'm not familiar enough with libvirt, etc., to add those myself without considerable additional research.

Revision history for this message
Trey Morris (tr3buchet) wrote :

agree

review: Disapprove
Revision history for this message
Rick Harris (rconradharris) wrote :

TL;DR: Agree that instance progress is a bad idea, but we should merge this anyway and reevaluate our options at the Essex Design Summit.

This patch is catching a lot of heat, and for good reason. The current implementation, while simple, is not all that accurate, and creating accurate progress at scale may turn out to be either far too complex or outright impossible.

To those points, I'm in complete agreement and at the Essex design summit I will argue against including any kind of instance progress.

All that said, I still think we should merge this patch in the interim. The reasons are:

1. REQUIREMENT: We committed to supporting the OpenStack API, and progress is a required part of the spec. If we're going to abandon part of the spec for technical reasons, that's something we should hash out in a formal setting where we can have some good back-and-forth on this.

Until we decide to change course, though, I think we should continue our march towards 100% compatibility.

2. EASY-TO-REMOVE: When and if we decide to drop support for instance progress, removing the code should be dead-simple: it's just a matter of removing the update_progress method and its callers, dropping the column, and having the OSAPI return to its original behavior of stubbing out progress as 0 or 100% depending on status.

3. INCLUDES OTHER USEFUL FIXES/REFACTORINGS: This patch includes additional bug fixes and code cleanups which would be nice to have merged in. They could be extracted into a separate patch (in hindsight that would have been the right approach), but since point 1) is (to me) a hard requirement, the effort of separating out these fixes is probably not worth it.

Revision history for this message
Trey Morris (tr3buchet) wrote :

You have valid reasons for merging, and I definitely agree about hashing it out in a formal setting. I'd be on the side against in this setting.

Ease of code removal isn't the issue. My fear is that once progress bars are supported, we'll never be able to get rid of them, much the same as how they were supported in the last cloud implementation and are still plaguing us in the next. It may take you <5 minutes to remove the code, but months of meetings and bickering beforehand to convince people to "allow" it to be removed, to decide that would be okay. But if it were never merged in the first place and we decide against it, there's the end of it.

Revision history for this message
Trey Morris (tr3buchet) wrote :

I hate the feeling that I'm opposing you in this, Rick. We really should be arguing against this being in the spec.

Revision history for this message
Matt Dietz (cerberus) wrote :

Rick: ++

review: Approve
Revision history for this message
Rick Harris (rconradharris) wrote :

Trey: I would say you're not really opposing me here, we're basically seeing things the same way.

I think the one difference is: I don't think we can pull an audible on API requirements, those need to be called in from the sidelines.

Once Essex gets here I'm sure we'll have a nice, vocal discussion as to whether we should break compatibility.

Revision history for this message
Trey Morris (tr3buchet) wrote :

Regrettably, I could do nothing to prevent the disaster which devastated Tristram.

review: Approve
Revision history for this message
Naveed Massjouni (ironcamel) wrote :

> Regrettably, I could do nothing to prevent the disaster which devastated
> Tristram.

Did you fall hopelessly in love with Rick?

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Attempt to merge into lp:nova failed due to conflicts:

text conflict in nova/utils.py
text conflict in nova/virt/fake.py

Preview Diff

=== modified file 'nova/api/openstack/views/servers.py'
--- nova/api/openstack/views/servers.py 2011-09-09 13:48:38 +0000
+++ nova/api/openstack/views/servers.py 2011-09-21 21:22:24 +0000
@@ -137,11 +137,11 @@
         response = super(ViewBuilderV11, self)._build_detail(inst)
         response['server']['created'] = utils.isotime(inst['created_at'])
         response['server']['updated'] = utils.isotime(inst['updated_at'])
-        if 'status' in response['server']:
-            if response['server']['status'] == "ACTIVE":
-                response['server']['progress'] = 100
-            elif response['server']['status'] == "BUILD":
-                response['server']['progress'] = 0
+
+        status = response['server'].get('status')
+        if status in ('ACTIVE', 'BUILD', 'REBUILD', 'RESIZE',
+                      'VERIFY_RESIZE'):
+            response['server']['progress'] = inst['progress'] or 0

         response['server']['accessIPv4'] = inst.get('access_ip_v4') or ""
         response['server']['accessIPv6'] = inst.get('access_ip_v6') or ""
=== modified file 'nova/compute/api.py'
--- nova/compute/api.py 2011-09-21 20:17:05 +0000
+++ nova/compute/api.py 2011-09-21 21:22:24 +0000
@@ -846,7 +846,8 @@

         self.update(context,
                     instance_id,
-                    task_state=task_states.DELETING)
+                    task_state=task_states.DELETING,
+                    progress=0)

         host = instance['host']
         if host:
@@ -869,7 +870,8 @@
                     instance_id,
                     vm_state=vm_states.ACTIVE,
                     task_state=task_states.STOPPING,
-                    terminated_at=utils.utcnow())
+                    terminated_at=utils.utcnow(),
+                    progress=0)

         host = instance['host']
         if host:
@@ -1169,7 +1171,8 @@
                     display_name=name,
                     image_ref=image_href,
                     vm_state=vm_states.ACTIVE,
-                    task_state=task_states.REBUILDING)
+                    task_state=task_states.REBUILDING,
+                    progress=0)

         rebuild_params = {
             "new_pass": admin_password,
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py 2011-09-21 15:54:30 +0000
+++ nova/compute/manager.py 2011-09-21 21:22:24 +0000
@@ -882,7 +882,9 @@
                 migration_ref.instance_uuid)

         network_info = self._get_instance_nw_info(context, instance_ref)
-        self.driver.destroy(instance_ref, network_info)
+        self.driver.confirm_migration(
+                migration_ref, instance_ref, network_info)
+
         usage_info = utils.usage_from_instance(instance_ref)
         notifier.notify('compute.%s' % self.host,
                         'compute.instance.resize.confirm',
@@ -937,7 +939,7 @@
                 local_gb=instance_type['local_gb'],
                 instance_type_id=instance_type['id'])

-        self.driver.revert_migration(instance_ref)
+        self.driver.finish_revert_migration(instance_ref)
         self.db.migration_update(context, migration_id,
                 {'status': 'reverted'})
         usage_info = utils.usage_from_instance(instance_ref)
@@ -961,7 +963,8 @@
         # of the instance down
         instance_ref = self.db.instance_get_by_uuid(context, instance_id)

-        if instance_ref['host'] == FLAGS.host:
+        same_host = instance_ref['host'] == FLAGS.host
+        if same_host and not FLAGS.allow_resize_to_same_host:
             self._instance_update(context,
                                   instance_id,
                                   vm_state=vm_states.ERROR)
@@ -1012,7 +1015,7 @@
                 {'status': 'migrating'})

         disk_info = self.driver.migrate_disk_and_power_off(
-                instance_ref, migration_ref['dest_host'])
+                context, instance_ref, migration_ref['dest_host'])
         self.db.migration_update(context,
                                  migration_id,
                                  {'status': 'post-migrating'})
@@ -1057,8 +1060,8 @@
                 instance_ref.uuid)

         network_info = self._get_instance_nw_info(context, instance_ref)
-        self.driver.finish_migration(context, instance_ref, disk_info,
-                                     network_info, resize_instance)
+        self.driver.finish_migration(context, migration_ref, instance_ref,
+                                     disk_info, network_info, resize_instance)

         self._instance_update(context,
                               instance_id,
=== added file 'nova/db/sqlalchemy/migrate_repo/versions/049_add_instances_progress.py'
--- nova/db/sqlalchemy/migrate_repo/versions/049_add_instances_progress.py 1970-01-01 00:00:00 +0000
+++ nova/db/sqlalchemy/migrate_repo/versions/049_add_instances_progress.py 2011-09-21 21:22:24 +0000
@@ -0,0 +1,43 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2010 OpenStack LLC.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from sqlalchemy import *
+from migrate import *
+
+from nova import log as logging
+
+meta = MetaData()
+
+instances = Table('instances', meta,
+        Column("id", Integer(), primary_key=True, nullable=False))
+
+# Add progress column to instances table
+progress = Column('progress', Integer())
+
+
+def upgrade(migrate_engine):
+    meta.bind = migrate_engine
+
+    try:
+        instances.create_column(progress)
+    except Exception:
+        logging.error(_("progress column not added to instances table"))
+        raise
+
+
+def downgrade(migrate_engine):
+    meta.bind = migrate_engine
+    instances.drop_column(progress)
=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-09-21 12:04:37 +0000
+++ nova/db/sqlalchemy/models.py 2011-09-21 21:22:24 +0000
@@ -241,6 +241,8 @@
     access_ip_v4 = Column(String(255))
     access_ip_v6 = Column(String(255))

+    progress = Column(Integer)
+

 class VirtualStorageArray(BASE, NovaBase):
     """
=== modified file 'nova/flags.py'
--- nova/flags.py 2011-09-20 17:41:23 +0000
+++ nova/flags.py 2011-09-21 21:22:24 +0000
@@ -434,3 +434,7 @@
         'nova.compute.api:nova.notifier.api.notify_decorator'],
         'Module list representing monkey '
         'patched module and decorator')
+
+DEFINE_bool('allow_resize_to_same_host', False,
+            'Allow destination machine to match source for resize. Useful'
+            ' when testing in environments with only one host machine.')
=== modified file 'nova/tests/api/openstack/contrib/test_createserverext.py'
--- nova/tests/api/openstack/contrib/test_createserverext.py 2011-09-09 01:30:21 +0000
+++ nova/tests/api/openstack/contrib/test_createserverext.py 2011-09-21 21:22:24 +0000
@@ -1,4 +1,4 @@
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
+# vim: tabstop=5 shiftwidth=4 softtabstop=4

 # Copyright 2010-2011 OpenStack LLC.
 # All Rights Reserved.
@@ -54,6 +54,7 @@
         "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0),
         "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0),
         "security_groups": [{"id": 1, "name": "test"}],
+        "progress": 0,
         "image_ref": 'http://foo.com/123',
         "instance_type": {"flavorid": '124'},
     }
@@ -119,7 +120,8 @@
                 'user_id': 'fake',
                 'project_id': 'fake',
                 'created_at': "",
-                'updated_at': ""}]
+                'updated_at': "",
+                'progress': 0}]

     def set_admin_password(self, *args, **kwargs):
         pass
=== modified file 'nova/tests/api/openstack/contrib/test_volumes.py'
--- nova/tests/api/openstack/contrib/test_volumes.py 2011-09-16 18:35:10 +0000
+++ nova/tests/api/openstack/contrib/test_volumes.py 2011-09-21 21:22:24 +0000
@@ -1,4 +1,4 @@
-# Copyright 2011 Josh Durgin
+# Copyright 2013 Josh Durgin
 # All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -43,6 +43,7 @@
         'project_id': 'fake',
         'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0),
         'updated_at': datetime.datetime(2010, 11, 11, 11, 0, 0),
+        'progress': 0
     }]
=== modified file 'nova/tests/api/openstack/test_server_actions.py'
--- nova/tests/api/openstack/test_server_actions.py 2011-09-06 11:31:39 +0000
+++ nova/tests/api/openstack/test_server_actions.py 2011-09-21 21:22:24 +0000
@@ -91,6 +91,7 @@
         "access_ip_v6": "",
         "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
         "virtual_interfaces": [],
+        "progress": 0,
     }

     instance["fixed_ips"] = {
=== modified file 'nova/tests/api/openstack/test_servers.py'
--- nova/tests/api/openstack/test_servers.py 2011-09-21 15:54:30 +0000
+++ nova/tests/api/openstack/test_servers.py 2011-09-21 21:22:24 +0000
@@ -162,7 +162,7 @@
                  vm_state=None, task_state=None,
                  reservation_id="", uuid=FAKE_UUID, image_ref="10",
                  flavor_id="1", interfaces=None, name=None, key_name='',
-                 access_ipv4=None, access_ipv6=None):
+                 access_ipv4=None, access_ipv6=None, progress=0):
     metadata = []
     metadata.append(InstanceMetadata(key='seq', value=id))

@@ -222,7 +222,8 @@
         "access_ip_v4": access_ipv4,
         "access_ip_v6": access_ipv6,
         "uuid": uuid,
-        "virtual_interfaces": interfaces}
+        "virtual_interfaces": interfaces,
+        "progress": progress}

     instance["fixed_ips"] = {
         "address": private_address,
@@ -548,7 +549,8 @@
             },
         ]
         new_return_server = return_server_with_attributes(
-            interfaces=interfaces, vm_state=vm_states.ACTIVE)
+            interfaces=interfaces, vm_state=vm_states.ACTIVE,
+            progress=100)
         self.stubs.Set(nova.db.api, 'instance_get', new_return_server)

         req = webob.Request.blank('/v1.1/fake/servers/1')
@@ -645,7 +647,7 @@
         ]
         new_return_server = return_server_with_attributes(
             interfaces=interfaces, vm_state=vm_states.ACTIVE,
-            image_ref=image_ref, flavor_id=flavor_id)
+            image_ref=image_ref, flavor_id=flavor_id, progress=100)
         self.stubs.Set(nova.db.api, 'instance_get', new_return_server)

         req = webob.Request.blank('/v1.1/fake/servers/1')
@@ -1527,6 +1529,7 @@
             "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0),
             "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0),
             "config_drive": self.config_drive,
+            "progress": 0
         }

     def server_update(context, id, params):
@@ -3819,7 +3822,8 @@
             "accessIPv6": "fead::1234",
             #"address": ,
             #"floating_ips": [{"address":ip} for ip in public_addresses]}
-            "uuid": "deadbeef-feed-edee-beef-d0ea7beefedd"}
+            "uuid": "deadbeef-feed-edee-beef-d0ea7beefedd",
+            "progress": 0}

         return instance

@@ -3942,6 +3946,7 @@
     def test_build_server_detail_active_status(self):
         #set the power state of the instance to running
         self.instance['vm_state'] = vm_states.ACTIVE
+        self.instance['progress'] = 100
         image_bookmark = "http://localhost/images/5"
         flavor_bookmark = "http://localhost/flavors/1"
         expected_server = {
=== modified file 'nova/tests/test_compute.py'
--- nova/tests/test_compute.py 2011-09-19 21:53:17 +0000
+++ nova/tests/test_compute.py 2011-09-21 21:22:24 +0000
@@ -584,7 +584,7 @@
             pass

         self.stubs.Set(self.compute.driver, 'finish_migration', fake)
-        self.stubs.Set(self.compute.driver, 'revert_migration', fake)
+        self.stubs.Set(self.compute.driver, 'finish_revert_migration', fake)
         self.stubs.Set(self.compute.network_api, 'get_instance_nw_info', fake)

         self.compute.run_instance(self.context, instance_id)
=== modified file 'nova/tests/test_virt_drivers.py'
--- nova/tests/test_virt_drivers.py 2011-09-15 19:09:14 +0000
+++ nova/tests/test_virt_drivers.py 2011-09-21 21:22:24 +0000
@@ -185,7 +185,8 @@
         instance_ref = test_utils.get_test_instance()
         network_info = test_utils.get_test_network_info()
         self.connection.spawn(self.ctxt, instance_ref, network_info)
-        self.connection.migrate_disk_and_power_off(instance_ref, 'dest_host')
+        self.connection.migrate_disk_and_power_off(
+            self.ctxt, instance_ref, 'dest_host')

     @catch_notimplementederror
     def test_pause(self):
=== modified file 'nova/tests/test_xenapi.py'
--- nova/tests/test_xenapi.py 2011-09-13 20:33:34 +0000
+++ nova/tests/test_xenapi.py 2011-09-21 21:22:24 +0000
@@ -76,7 +76,7 @@
         db_fakes.stub_out_db_instance_api(self.stubs)
         stubs.stub_out_get_target(self.stubs)
         xenapi_fake.reset()
-        self.values = {'id': 1,
+        self.instance_values = {'id': 1,
                   'project_id': self.user_id,
                   'user_id': 'fake',
                   'image_ref': 1,
@@ -132,7 +132,7 @@
         stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests)
         conn = xenapi_conn.get_connection(False)
         volume = self._create_volume()
-        instance = db.instance_create(self.context, self.values)
+        instance = db.instance_create(self.context, self.instance_values)
         vm = xenapi_fake.create_vm(instance.name, 'Running')
         result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')

@@ -152,7 +152,7 @@
                                stubs.FakeSessionForVolumeFailedTests)
         conn = xenapi_conn.get_connection(False)
         volume = self._create_volume()
-        instance = db.instance_create(self.context, self.values)
+        instance = db.instance_create(self.context, self.instance_values)
         xenapi_fake.create_vm(instance.name, 'Running')
         self.assertRaises(Exception,
                           conn.attach_volume,
@@ -369,7 +369,7 @@
                     create_record=True, empty_dns=False):
         stubs.stubout_loopingcall_start(self.stubs)
         if create_record:
-            values = {'id': instance_id,
+            instance_values = {'id': instance_id,
                       'project_id': self.project_id,
                       'user_id': self.user_id,
                       'image_ref': image_ref,
@@ -379,7 +379,7 @@
                       'os_type': os_type,
                       'hostname': hostname,
                       'architecture': architecture}
-            instance = db.instance_create(self.context, values)
+            instance = db.instance_create(self.context, instance_values)
         else:
             instance = db.instance_get(self.context, instance_id)
         network_info = [({'bridge': 'fa0', 'id': 0, 'injected': True},
@@ -623,28 +623,28 @@
         # Ensure that it will not unrescue a non-rescued instance.
         self.assertRaises(Exception, conn.unrescue, instance, None)

-    def test_revert_migration(self):
+    def test_finish_revert_migration(self):
         instance = self._create_instance()

         class VMOpsMock():

             def __init__(self):
-                self.revert_migration_called = False
+                self.finish_revert_migration_called = False

-            def revert_migration(self, instance):
-                self.revert_migration_called = True
+            def finish_revert_migration(self, instance):
+                self.finish_revert_migration_called = True

         stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)

         conn = xenapi_conn.get_connection(False)
         conn._vmops = VMOpsMock()
-        conn.revert_migration(instance)
-        self.assertTrue(conn._vmops.revert_migration_called)
+        conn.finish_revert_migration(instance)
+        self.assertTrue(conn._vmops.finish_revert_migration_called)

     def _create_instance(self, instance_id=1, spawn=True):
         """Creates and spawns a test instance."""
         stubs.stubout_loopingcall_start(self.stubs)
-        values = {
+        instance_values = {
             'id': instance_id,
             'project_id': self.project_id,
             'user_id': self.user_id,
@@ -654,7 +654,7 @@
             'instance_type_id': '3',  # m1.large
             'os_type': 'linux',
             'architecture': 'x86-64'}
-        instance = db.instance_create(self.context, values)
+        instance = db.instance_create(self.context, instance_values)
         network_info = [({'bridge': 'fa0', 'id': 0, 'injected': False},
                           {'broadcast': '192.168.0.255',
                            'dns': ['192.168.0.1'],
@@ -732,7 +732,7 @@
         self.user_id = 'fake'
         self.project_id = 'fake'
         self.context = context.RequestContext(self.user_id, self.project_id)
-        self.values = {'id': 1,
+        self.instance_values = {'id': 1,
                   'project_id': self.project_id,
                   'user_id': self.user_id,
                   'image_ref': 1,
@@ -743,22 +743,34 @@
                   'os_type': 'linux',
                   'architecture': 'x86-64'}

+        migration_values = {
+            'source_compute': 'nova-compute',
+            'dest_compute': 'nova-compute',
+            'dest_host': '10.127.5.114',
+            'status': 'post-migrating',
+            'instance_uuid': '15f23e6a-cc6e-4d22-b651-d9bdaac316f7',
+            'old_instance_type_id': 5,
+            'new_instance_type_id': 1
+        }
+        self.migration = db.migration_create(
+            context.get_admin_context(), migration_values)
+
         fake_utils.stub_out_utils_execute(self.stubs)
         stubs.stub_out_migration_methods(self.stubs)
         stubs.stubout_get_this_vm_uuid(self.stubs)
         glance_stubs.stubout_glance_client(self.stubs)

     def test_migrate_disk_and_power_off(self):
-        instance = db.instance_create(self.context, self.values)
+        instance = db.instance_create(self.context, self.instance_values)
         stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)
         conn = xenapi_conn.get_connection(False)
-        conn.migrate_disk_and_power_off(instance, '127.0.0.1')
+        conn.migrate_disk_and_power_off(self.context, instance, '127.0.0.1')

     def test_revert_migrate(self):
-        instance = db.instance_create(self.context, self.values)
+        instance = db.instance_create(self.context, self.instance_values)
         self.called = False
         self.fake_vm_start_called = False
-        self.fake_revert_migration_called = False
+        self.fake_finish_revert_migration_called = False

         def fake_vm_start(*args, **kwargs):
             self.fake_vm_start_called = True
@@ -766,13 +778,14 @@
         def fake_vdi_resize(*args, **kwargs):
             self.called = True

-        def fake_revert_migration(*args, **kwargs):
-            self.fake_revert_migration_called = True
+        def fake_finish_revert_migration(*args, **kwargs):
+            self.fake_finish_revert_migration_called = True

         self.stubs.Set(stubs.FakeSessionForMigrationTests,
                 "VDI_resize_online", fake_vdi_resize)
         self.stubs.Set(vmops.VMOps, '_start', fake_vm_start)
-        self.stubs.Set(vmops.VMOps, 'revert_migration', fake_revert_migration)
+        self.stubs.Set(vmops.VMOps, 'finish_revert_migration',
+                       fake_finish_revert_migration)

         stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)
         stubs.stubout_loopingcall_start(self.stubs)
@@ -791,17 +804,17 @@
                            'label': 'fake',
                            'mac': 'DE:AD:BE:EF:00:00',
                            'rxtx_cap': 3})]
-        conn.finish_migration(self.context, instance,
+        conn.finish_migration(self.context, self.migration, instance,
                               dict(base_copy='hurr', cow='durr'),
                               network_info, resize_instance=True)
         self.assertEqual(self.called, True)
         self.assertEqual(self.fake_vm_start_called, True)

-        conn.revert_migration(instance)
-        self.assertEqual(self.fake_revert_migration_called, True)
+        conn.finish_revert_migration(instance)
+        self.assertEqual(self.fake_finish_revert_migration_called, True)

     def test_finish_migrate(self):
-        instance = db.instance_create(self.context, self.values)
+        instance = db.instance_create(self.context, self.instance_values)
         self.called = False
         self.fake_vm_start_called = False

@@ -832,7 +845,7 @@
                            'label': 'fake',
                            'mac': 'DE:AD:BE:EF:00:00',
                            'rxtx_cap': 3})]
-        conn.finish_migration(self.context, instance,
+        conn.finish_migration(self.context, self.migration, instance,
                               dict(base_copy='hurr', cow='durr'),
                               network_info, resize_instance=True)
         self.assertEqual(self.called, True)
@@ -841,8 +854,9 @@
     def test_finish_migrate_no_local_storage(self):
         tiny_type_id = \
             instance_types.get_instance_type_by_name('m1.tiny')['id']
-        self.values.update({'instance_type_id': tiny_type_id, 'local_gb': 0})
-        instance = db.instance_create(self.context, self.values)
+        self.instance_values.update({'instance_type_id': tiny_type_id,
+                                     'local_gb': 0})
+        instance = db.instance_create(self.context, self.instance_values)

         def fake_vdi_resize(*args, **kwargs):
             raise Exception("This shouldn't be called")
@@ -866,12 +880,12 @@
                            'label': 'fake',
                            'mac': 'DE:AD:BE:EF:00:00',
                            'rxtx_cap': 3})]
-        conn.finish_migration(self.context, instance,
+        conn.finish_migration(self.context, self.migration, instance,
                               dict(base_copy='hurr', cow='durr'),
                               network_info, resize_instance=True)

     def test_finish_migrate_no_resize_vdi(self):
-        instance = db.instance_create(self.context, self.values)
+        instance = db.instance_create(self.context, self.instance_values)

         def fake_vdi_resize(*args, **kwargs):
             raise Exception("This shouldn't be called")
@@ -897,7 +911,7 @@
                            'rxtx_cap': 3})]

         # Resize instance would be determined by the compute call
-        conn.finish_migration(self.context, instance,
+        conn.finish_migration(self.context, self.migration, instance,
                               dict(base_copy='hurr', cow='durr'),
                               network_info, resize_instance=False)
=== modified file 'nova/tests/xenapi/stubs.py'
--- nova/tests/xenapi/stubs.py 2011-08-28 19:11:14 +0000
+++ nova/tests/xenapi/stubs.py 2011-09-21 21:22:24 +0000
@@ -295,9 +295,12 @@
         vm['is_control_domain'] = False
         vm['domid'] = random.randrange(1, 1 << 16)

+    def VM_set_name_label(self, *args):
+        pass
+

 def stub_out_migration_methods(stubs):
-    def fake_get_snapshot(self, instance):
+    def fake_create_snapshot(self, instance):
         return 'vm_ref', dict(image='foo', snap='bar')

     @classmethod
@@ -327,7 +330,7 @@
     stubs.Set(vmops.VMOps, '_destroy', fake_destroy)
     stubs.Set(vm_utils.VMHelper, 'scan_default_sr', fake_sr)
     stubs.Set(vm_utils.VMHelper, 'scan_sr', fake_sr)
-    stubs.Set(vmops.VMOps, '_get_snapshot', fake_get_snapshot)
+    stubs.Set(vmops.VMOps, '_create_snapshot', fake_create_snapshot)
     stubs.Set(vm_utils.VMHelper, 'get_vdi_for_vm_safely', fake_get_vdi)
     stubs.Set(xenapi_conn.XenAPISession, 'wait_for_task', lambda x, y, z: None)
     stubs.Set(vm_utils.VMHelper, 'get_sr_path', fake_get_sr_path)
=== modified file 'nova/utils.py'
--- nova/utils.py 2011-09-20 20:21:06 +0000
+++ nova/utils.py 2011-09-21 21:22:24 +0000
@@ -912,6 +912,20 @@
     return [{label: x} for x in lst]


+def timefunc(func):
+    """Decorator that logs how long a particular function took to execute"""
+    @functools.wraps(func)
+    def inner(*args, **kwargs):
+        start_time = time.time()
+        try:
+            return func(*args, **kwargs)
+        finally:
+            total_time = time.time() - start_time
+            LOG.debug(_("timefunc: '%(name)s' took %(total_time).2f secs") %
+                      dict(name=func.__name__, total_time=total_time))
+    return inner
+
+
 def generate_glance_url():
     """Generate the URL to glance."""
     # TODO(jk0): This will eventually need to take SSL into consideration
=== modified file 'nova/virt/driver.py'
--- nova/virt/driver.py 2011-09-21 15:54:30 +0000
+++ nova/virt/driver.py 2011-09-21 21:22:24 +0000
@@ -225,12 +225,11 @@
         """
         raise NotImplementedError()

-    def migrate_disk_and_power_off(self, instance, dest):
+    def migrate_disk_and_power_off(self, context, instance, dest):
         """
         Transfers the disk of a running instance in multiple phases, turning
         off the instance before the end.
         """
-        # TODO(Vek): Need to pass context in for access to auth_token
         raise NotImplementedError()

     def snapshot(self, context, instance, image_id):
@@ -244,8 +243,8 @@
         """
         raise NotImplementedError()

-    def finish_migration(self, context, instance, disk_info, network_info,
-                         resize_instance):
+    def finish_migration(self, context, migration, instance, disk_info,
+                         network_info, resize_instance):
         """Completes a resize, turning on the migrated instance

         :param network_info:
@@ -253,8 +252,13 @@
         """
         raise NotImplementedError()

-    def revert_migration(self, instance):
-        """Reverts a resize, powering back on the instance"""
+    def confirm_migration(self, migration, instance, network_info):
+        """Confirms a resize, destroying the source VM"""
+        # TODO(Vek): Need to pass context in for access to auth_token
+        raise NotImplementedError()
+
+    def finish_revert_migration(self, instance):
+        """Finish reverting a resize, powering back on the instance"""
         # TODO(Vek): Need to pass context in for access to auth_token
         raise NotImplementedError()
=== modified file 'nova/virt/fake.py'
--- nova/virt/fake.py 2011-09-15 18:44:49 +0000
+++ nova/virt/fake.py 2011-09-21 21:22:24 +0000
@@ -130,12 +130,12 @@
     def poll_rescued_instances(self, timeout):
         pass

+    def migrate_disk_and_power_off(self, context, instance, dest):
+        pass
+
     def poll_unconfirmed_resizes(self, resize_confirm_window):
         pass

-    def migrate_disk_and_power_off(self, instance, dest):
-        pass
-
     def pause(self, instance, callback):
         pass
=== modified file 'nova/virt/xenapi/vm_utils.py'
--- nova/virt/xenapi/vm_utils.py 2011-09-10 17:56:54 +0000
+++ nova/virt/xenapi/vm_utils.py 2011-09-21 21:22:24 +0000
@@ -695,6 +695,10 @@
         return is_pv

     @classmethod
+    def set_vm_name_label(cls, session, vm_ref, name_label):
+        session.get_xenapi().VM.set_name_label(vm_ref, name_label)
+
+    @classmethod
     def lookup(cls, session, name_label):
         """Look the instance i up, and returns it if available"""
         vm_refs = session.get_xenapi().VM.get_by_name_label(name_label)
=== modified file 'nova/virt/xenapi/vmops.py'
--- nova/virt/xenapi/vmops.py 2011-09-21 15:54:30 +0000
+++ nova/virt/xenapi/vmops.py 2011-09-21 21:22:24 +0000
@@ -56,6 +56,9 @@
56 'nova.virt.xenapi.vif.XenAPIBridgeDriver',56 'nova.virt.xenapi.vif.XenAPIBridgeDriver',
57 'The XenAPI VIF driver using XenServer Network APIs.')57 'The XenAPI VIF driver using XenServer Network APIs.')
5858
59RESIZE_TOTAL_STEPS = 5
60BUILD_TOTAL_STEPS = 4
61
5962
60def cmp_version(a, b):63def cmp_version(a, b):
61 """Compare two version strings (eg 0.0.1.10 > 0.0.1.9)"""64 """Compare two version strings (eg 0.0.1.10 > 0.0.1.9)"""
@@ -111,20 +114,40 @@
111 instance_infos.append(instance_info)114 instance_infos.append(instance_info)
112 return instance_infos115 return instance_infos
113116
114 def revert_migration(self, instance):117 def confirm_migration(self, migration, instance, network_info):
115 vm_ref = VMHelper.lookup(self._session, instance.name)118 name_label = self._get_orig_vm_name_label(instance)
119 vm_ref = VMHelper.lookup(self._session, name_label)
120 return self._destroy(instance, vm_ref, network_info, shutdown=False)
121
122 def finish_revert_migration(self, instance):
123 # NOTE(sirp): the original vm was suffixed with '-orig'; find it using
124 # the old suffix, remove the suffix, then power it back on.
125 name_label = self._get_orig_vm_name_label(instance)
126 vm_ref = VMHelper.lookup(self._session, name_label)
127
128 # Remove the '-orig' suffix (which was added in case the resized VM
129 # ends up on the source host, common during testing)
130 name_label = instance.name
131 VMHelper.set_vm_name_label(self._session, vm_ref, name_label)
132
116 self._start(instance, vm_ref)133 self._start(instance, vm_ref)
117134
118 def finish_migration(self, context, instance, disk_info, network_info,135 def finish_migration(self, context, migration, instance, disk_info,
119 resize_instance):136 network_info, resize_instance):
120 vdi_uuid = self.link_disks(instance, disk_info['base_copy'],137 vdi_uuid = self.link_disks(instance, disk_info['base_copy'],
121 disk_info['cow'])138 disk_info['cow'])
139
122 vm_ref = self._create_vm(context, instance,140 vm_ref = self._create_vm(context, instance,
123 [dict(vdi_type='os', vdi_uuid=vdi_uuid)],141 [dict(vdi_type='os', vdi_uuid=vdi_uuid)],
124 network_info)142 network_info)
125 if resize_instance:143 if resize_instance:
126 self.resize_instance(instance, vdi_uuid)144 self.resize_instance(instance, vdi_uuid)
145
146 # 5. Start VM
127 self._start(instance, vm_ref=vm_ref)147 self._start(instance, vm_ref=vm_ref)
148 self._update_instance_progress(context, instance,
149 step=5,
150 total_steps=RESIZE_TOTAL_STEPS)
128151
129 def _start(self, instance, vm_ref=None):152 def _start(self, instance, vm_ref=None):
130 """Power on a VM instance"""153 """Power on a VM instance"""
@@ -147,9 +170,35 @@
147 def spawn(self, context, instance, network_info):170 def spawn(self, context, instance, network_info):
148 vdis = None171 vdis = None
149 try:172 try:
173 # 1. Vanity Step
174 # NOTE(sirp): _create_disk will potentially take a *very* long
175 # time to complete since it has to fetch the image over the
176 # network and images can be several gigs in size. To avoid
177 # progress remaining at 0% for too long, which will appear to be
178 # an error, we insert a "vanity" step to bump the progress up one
179 # notch above 0.
180 self._update_instance_progress(context, instance,
181 step=1,
182 total_steps=BUILD_TOTAL_STEPS)
183
184 # 2. Fetch the Image over the Network
150 vdis = self._create_disks(context, instance)185 vdis = self._create_disks(context, instance)
186 self._update_instance_progress(context, instance,
187 step=2,
188 total_steps=BUILD_TOTAL_STEPS)
189
190 # 3. Create the VM records
151 vm_ref = self._create_vm(context, instance, vdis, network_info)191 vm_ref = self._create_vm(context, instance, vdis, network_info)
192 self._update_instance_progress(context, instance,
193 step=3,
194 total_steps=BUILD_TOTAL_STEPS)
195
196 # 4. Boot the Instance
152 self._spawn(instance, vm_ref)197 self._spawn(instance, vm_ref)
198 self._update_instance_progress(context, instance,
199 step=4,
200 total_steps=BUILD_TOTAL_STEPS)
201
153 except (self.XenAPI.Failure, OSError, IOError) as spawn_error:202 except (self.XenAPI.Failure, OSError, IOError) as spawn_error:
154 LOG.exception(_("instance %s: Failed to spawn"),203 LOG.exception(_("instance %s: Failed to spawn"),
155 instance.id, exc_info=sys.exc_info())204 instance.id, exc_info=sys.exc_info())
@@ -169,7 +218,7 @@
169 if vm_ref is not None:218 if vm_ref is not None:
170 raise exception.InstanceExists(name=instance_name)219 raise exception.InstanceExists(name=instance_name)
171220
172 #ensure enough free memory is available221 # Ensure enough free memory is available
173 if not VMHelper.ensure_free_mem(self._session, instance):222 if not VMHelper.ensure_free_mem(self._session, instance):
174 LOG.exception(_('instance %(instance_name)s: not enough free '223 LOG.exception(_('instance %(instance_name)s: not enough free '
175 'memory') % locals())224 'memory') % locals())
@@ -501,7 +550,8 @@
501 """550 """
502 template_vm_ref = None551 template_vm_ref = None
503 try:552 try:
504 template_vm_ref, template_vdi_uuids = self._get_snapshot(instance)553 template_vm_ref, template_vdi_uuids =\
554 self._create_snapshot(instance)
505 # call plugin to ship snapshot off to glance555 # call plugin to ship snapshot off to glance
506 VMHelper.upload_image(context,556 VMHelper.upload_image(context,
507 self._session, instance, template_vdi_uuids, image_id)557 self._session, instance, template_vdi_uuids, image_id)
@@ -512,7 +562,7 @@
512562
513 logging.debug(_("Finished snapshot and upload for VM %s"), instance)563 logging.debug(_("Finished snapshot and upload for VM %s"), instance)
514564
515 def _get_snapshot(self, instance):565 def _create_snapshot(self, instance):
516 #TODO(sirp): Add quiesce and VSS locking support when Windows support566 #TODO(sirp): Add quiesce and VSS locking support when Windows support
517 # is added567 # is added
518568
@@ -529,7 +579,39 @@
529 % locals())579 % locals())
530 return580 return
531581
532 def migrate_disk_and_power_off(self, instance, dest):582 def _migrate_vhd(self, instance, vdi_uuid, dest, sr_path):
583 instance_id = instance.id
584 params = {'host': dest,
585 'vdi_uuid': vdi_uuid,
586 'instance_id': instance_id,
587 'sr_path': sr_path}
588
589 task = self._session.async_call_plugin('migration', 'transfer_vhd',
590 {'params': pickle.dumps(params)})
591 self._session.wait_for_task(task, instance_id)
592
593 def _get_orig_vm_name_label(self, instance):
594 return instance.name + '-orig'
595
596 def _update_instance_progress(self, context, instance, step, total_steps):
597 """Update instance progress percent to reflect current step number
598 """
599 # FIXME(sirp): for now we're taking a KISS approach to instance
600 # progress:
601 # Divide the action's workflow into discrete steps and "bump" the
602 # instance's progress field as each step is completed.
603 #
604 # For a first cut this should be fine, however, for large VM images,
605 # the _create_disks step begins to dominate the equation. A
606 # better approximation would use the percentage of the VM image that
607 # has been streamed to the destination host.
608 progress = round(float(step) / total_steps * 100)
609 instance_id = instance['id']
610 LOG.debug(_("Updating instance '%(instance_id)s' progress to"
611 " %(progress)d") % locals())
612 db.instance_update(context, instance_id, {'progress': progress})
613
614 def migrate_disk_and_power_off(self, context, instance, dest):
533 """Copies a VHD from one host machine to another.615 """Copies a VHD from one host machine to another.
534616
535 :param instance: the instance that owns the VHD in question.617 :param instance: the instance that owns the VHD in question.
@@ -537,6 +619,11 @@
537 :param disk_type: values are 'primary' or 'cow'.619 :param disk_type: values are 'primary' or 'cow'.
538620
539 """621 """
622 # 0. Zero out the progress to begin
623 self._update_instance_progress(context, instance,
624 step=0,
625 total_steps=RESIZE_TOTAL_STEPS)
626
540 vm_ref = VMHelper.lookup(self._session, instance.name)627 vm_ref = VMHelper.lookup(self._session, instance.name)
541628
542 # The primary VDI becomes the COW after the snapshot, and we can629 # The primary VDI becomes the COW after the snapshot, and we can
@@ -546,34 +633,43 @@
         base_copy_uuid = cow_uuid = None
         template_vdi_uuids = template_vm_ref = None
         try:
-            # transfer the base copy
-            template_vm_ref, template_vdi_uuids = self._get_snapshot(instance)
+            # 1. Create Snapshot
+            template_vm_ref, template_vdi_uuids =\
+                    self._create_snapshot(instance)
+            self._update_instance_progress(context, instance,
+                                           step=1,
+                                           total_steps=RESIZE_TOTAL_STEPS)
+
             base_copy_uuid = template_vdi_uuids['image']
             vdi_ref, vm_vdi_rec = \
                     VMHelper.get_vdi_for_vm_safely(self._session, vm_ref)
             cow_uuid = vm_vdi_rec['uuid']
 
-            params = {'host': dest,
-                      'vdi_uuid': base_copy_uuid,
-                      'instance_id': instance.id,
-                      'sr_path': VMHelper.get_sr_path(self._session)}
-
-            task = self._session.async_call_plugin('migration', 'transfer_vhd',
-                                                   {'params': pickle.dumps(params)})
-            self._session.wait_for_task(task, instance.id)
-
-            # Now power down the instance and transfer the COW VHD
+            sr_path = VMHelper.get_sr_path(self._session)
+
+            # 2. Transfer the base copy
+            self._migrate_vhd(instance, base_copy_uuid, dest, sr_path)
+            self._update_instance_progress(context, instance,
+                                           step=2,
+                                           total_steps=RESIZE_TOTAL_STEPS)
+
+            # 3. Now power down the instance
             self._shutdown(instance, vm_ref, hard=False)
-
-            params = {'host': dest,
-                      'vdi_uuid': cow_uuid,
-                      'instance_id': instance.id,
-                      'sr_path': VMHelper.get_sr_path(self._session), }
-
-            task = self._session.async_call_plugin('migration', 'transfer_vhd',
-                                                   {'params': pickle.dumps(params)})
-            self._session.wait_for_task(task, instance.id)
+            self._update_instance_progress(context, instance,
+                                           step=3,
+                                           total_steps=RESIZE_TOTAL_STEPS)
+
+            # 4. Transfer the COW VHD
+            self._migrate_vhd(instance, cow_uuid, dest, sr_path)
+            self._update_instance_progress(context, instance,
+                                           step=4,
+                                           total_steps=RESIZE_TOTAL_STEPS)
 
+            # NOTE(sirp): in case we're resizing to the same host (for dev
+            # purposes), apply a suffix to name-label so the two VM records
+            # extant until a confirm_resize don't collide.
+            name_label = self._get_orig_vm_name_label(instance)
+            VMHelper.set_vm_name_label(self._session, vm_ref, name_label)
         finally:
             if template_vm_ref:
                 self._destroy(instance, template_vm_ref,
 
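The progress arithmetic in _update_instance_progress above is a pure function
of the step counter. A minimal standalone sketch, assuming for illustration
only that RESIZE_TOTAL_STEPS is 5 (the real constant is a module-level value
in vmops.py):

    # Hypothetical stand-in for the module-level constant in vmops.py
    RESIZE_TOTAL_STEPS = 5

    def progress_percent(step, total_steps):
        """Map a count of completed steps onto a 0-100 progress value."""
        return round(float(step) / total_steps * 100)

    # Each completed step bumps progress by an equal slice:
    assert [progress_percent(s, RESIZE_TOTAL_STEPS) for s in range(6)] == \
        [0, 20, 40, 60, 80, 100]

This also makes the known limitation concrete: the two transfer_vhd steps
dominate wall-clock time, so the reported percentage advances in a few coarse
jumps rather than smoothly.
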
=== modified file 'nova/virt/xenapi_conn.py'
--- nova/virt/xenapi_conn.py 2011-09-21 15:54:30 +0000
+++ nova/virt/xenapi_conn.py 2011-09-21 21:22:24 +0000
@@ -189,14 +189,19 @@
189 """Create VM instance"""189 """Create VM instance"""
190 self._vmops.spawn(context, instance, network_info)190 self._vmops.spawn(context, instance, network_info)
191191
192 def revert_migration(self, instance):192 def confirm_migration(self, migration, instance, network_info):
193 """Reverts a resize, powering back on the instance"""193 """Confirms a resize, destroying the source VM"""
194 self._vmops.revert_migration(instance)194 # TODO(Vek): Need to pass context in for access to auth_token
195195 self._vmops.confirm_migration(migration, instance, network_info)
196 def finish_migration(self, context, instance, disk_info, network_info,196
197 resize_instance=False):197 def finish_revert_migration(self, instance):
198 """Finish reverting a resize, powering back on the instance"""
199 self._vmops.finish_revert_migration(instance)
200
201 def finish_migration(self, context, migration, instance, disk_info,
202 network_info, resize_instance=False):
198 """Completes a resize, turning on the migrated instance"""203 """Completes a resize, turning on the migrated instance"""
199 self._vmops.finish_migration(context, instance, disk_info,204 self._vmops.finish_migration(context, migration, instance, disk_info,
200 network_info, resize_instance)205 network_info, resize_instance)
201206
202 def snapshot(self, context, instance, image_id):207 def snapshot(self, context, instance, image_id):
@@ -229,10 +234,10 @@
229 """Unpause paused VM instance"""234 """Unpause paused VM instance"""
230 self._vmops.unpause(instance, callback)235 self._vmops.unpause(instance, callback)
231236
232 def migrate_disk_and_power_off(self, instance, dest):237 def migrate_disk_and_power_off(self, context, instance, dest):
233 """Transfers the VHD of a running instance to another host, then shuts238 """Transfers the VHD of a running instance to another host, then shuts
234 off the instance copies over the COW disk"""239 off the instance copies over the COW disk"""
235 return self._vmops.migrate_disk_and_power_off(instance, dest)240 return self._vmops.migrate_disk_and_power_off(context, instance, dest)
236241
237 def suspend(self, instance, callback):242 def suspend(self, instance, callback):
238 """suspend the specified instance"""243 """suspend the specified instance"""
239244
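Taken together, the xenapi_conn.py hunks settle the driver-facing resize
lifecycle into four calls. A hedged outline of that interface, with
signatures taken from the hunks above and illustrative comments and bodies:

    class ResizeLifecycleSketch(object):
        """Illustrative outline of the four-call resize lifecycle."""

        def migrate_disk_and_power_off(self, context, instance, dest):
            # Source host: snapshot the VM, transfer its VHDs to dest,
            # then power the instance down.
            raise NotImplementedError()

        def finish_migration(self, context, migration, instance, disk_info,
                             network_info, resize_instance=False):
            # Destination host: assemble the copied disks and boot the
            # migrated instance.
            raise NotImplementedError()

        def confirm_migration(self, migration, instance, network_info):
            # User accepted the resize: destroy the renamed '-orig' VM
            # left behind on the source.
            raise NotImplementedError()

        def finish_revert_migration(self, instance):
            # User rejected the resize: power the original VM back on.
            raise NotImplementedError()
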
=== modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/glance'
--- plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-09-10 17:56:54 +0000
+++ plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-09-21 21:22:24 +0000
@@ -211,7 +211,11 @@
     snap_info = prepare_if_exists(staging_path, 'snap.vhd',
                                   image_info[0])
     if snap_info:
-        paths_to_move.append(snap_info[0])
+        # NOTE(sirp): this is an insert rather than an append since the
+        # 'snapshot' vhd needs to be copied into the SR before the base copy.
+        # If it isn't, then there is a possibility that snapwatchd will
+        # delete the base_copy since it is an unreferenced parent.
+        paths_to_move.insert(0, snap_info[0])
         # We return this snap as the VDI instead of image.vhd
         vdi_return_list.append(dict(vdi_type="os", vdi_uuid=snap_info[1]))
     else:
 
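The ordering constraint behind this hunk is easy to see with a toy example:
the child VHD must land in the SR before the parent it references, otherwise
the parent is briefly unreferenced and eligible for collection. A sketch with
made-up file names:

    # Toy illustration of the append vs. insert ordering fix; the file
    # names are made up.
    paths_to_move = ['base_copy.vhd']   # parent VHD, queued first
    snap_vhd = 'snap.vhd'               # child VHD that references it

    # append() would move the parent in first (unsafe: it sits in the SR
    # unreferenced until the child arrives, and snapwatchd may GC it):
    #     paths_to_move.append(snap_vhd)  -> ['base_copy.vhd', 'snap.vhd']

    # insert(0, ...) moves the child first, so the parent is referenced
    # from the moment it appears:
    paths_to_move.insert(0, snap_vhd)
    assert paths_to_move == ['snap.vhd', 'base_copy.vhd']
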
=== modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/migration'
--- plugins/xenserver/xenapi/etc/xapi.d/plugins/migration 2011-07-20 15:16:36 +0000
+++ plugins/xenserver/xenapi/etc/xapi.d/plugins/migration 2011-09-21 21:22:24 +0000
@@ -48,7 +48,7 @@
 
     # Discover the copied VHDs locally, and then set up paths to copy
     # them to under the SR
-    source_image_path = "%s/instance%d" % ('/images/', instance_id)
+    source_image_path = "/images/instance%d" % instance_id
     source_base_copy_path = "%s/%s.vhd" % (source_image_path,
                                            old_base_copy_uuid)
     source_cow_path = "%s/%s.vhd" % (source_image_path, old_cow_uuid)
@@ -74,9 +74,12 @@
                   (new_cow_path, new_base_copy_path))
     subprocess.call(shlex.split('/usr/sbin/vhd-util modify -n %s -p %s' %
                                 (new_cow_path, new_base_copy_path)))
+
     logging.debug('Moving VHDs into SR %s' % sr_path)
-    shutil.move("%s/%s.vhd" % (temp_vhd_path, new_base_copy_uuid), sr_path)
-    shutil.move("%s/%s.vhd" % (temp_vhd_path, new_cow_uuid), sr_path)
+    # NOTE(sirp): COW should be copied before base_copy to avoid snapwatchd
+    # GC'ing an unreferenced base copy VDI
+    shutil.move(new_cow_path, sr_path)
+    shutil.move(new_base_copy_path, sr_path)
 
     logging.debug('Cleaning up temporary SR path %s' % temp_vhd_path)
     os.rmdir(temp_vhd_path)
@@ -93,7 +96,7 @@
     vhd_path = "%s.vhd" % vdi_uuid
 
     source_path = "%s/%s" % (sr_path, vhd_path)
-    dest_path = '%s:%sinstance%d/' % (host, '/images/', instance_id)
+    dest_path = '%s:/images/instance%d/' % (host, instance_id)
 
     logging.debug("Preparing to transmit %s to %s" % (source_path,
                                                       dest_path))
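
The migration plugin's destination side applies the same child-before-parent
rule after reparenting the COW with vhd-util. A condensed sketch of that
sequence; the function name and paths are illustrative, while the vhd-util
invocation and move ordering mirror the hunk above:

    import shlex
    import shutil
    import subprocess

    def reparent_and_move_into_sr(new_cow_path, new_base_copy_path, sr_path):
        # Point the COW at its renamed base copy...
        subprocess.call(shlex.split(
            '/usr/sbin/vhd-util modify -n %s -p %s'
            % (new_cow_path, new_base_copy_path)))
        # ...then move the child ahead of its parent so the base copy is
        # never sitting in the SR unreferenced.
        shutil.move(new_cow_path, sr_path)
        shutil.move(new_base_copy_path, sr_path)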