Merge lp:~eday/nova/compute-abstraction into lp:~hudson-openstack/nova/trunk

Proposed by Eric Day
Status: Merged
Approved by: Vish Ishaya
Approved revision: 427
Merged at revision: 437
Proposed branch: lp:~eday/nova/compute-abstraction
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 953 lines (+347/-301)
12 files modified
nova/api/ec2/cloud.py (+31/-148)
nova/api/openstack/servers.py (+14/-84)
nova/compute/api.py (+212/-0)
nova/compute/instance_types.py (+20/-0)
nova/compute/manager.py (+1/-48)
nova/db/base.py (+36/-0)
nova/manager.py (+5/-9)
nova/quota.py (+5/-0)
nova/tests/api/openstack/fakes.py (+1/-1)
nova/tests/api/openstack/test_servers.py (+6/-0)
nova/tests/compute_unittest.py (+7/-4)
nova/tests/quota_unittest.py (+9/-7)
To merge this branch: bzr merge lp:~eday/nova/compute-abstraction
Reviewer Review Type Date Requested Status
Michael Gundlach (community) Approve
Vish Ishaya (community) Approve
Soren Hansen (community) Approve
Review via email: mp+41805@code.launchpad.net

Description of the change

Consolidated the start-instance logic in the two API classes into a single method. This also cleans up a number of small discrepancies between the two.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Questions on irc were addressed by 424. LGTM. We definitely need to clean up cloud unittests.

review: Approve
Revision history for this message
Soren Hansen (soren) wrote :

2010/11/25 Vish Ishaya <email address hidden>:
> Review: Approve
> Questions on irc were addressed by 424.  LGTM.  We definitely need to clean up cloud unittests.

It would probably be useful to post the relevant IRC conversation
here. I'd like to know what you talked about, but I'm not really
interested in digging through hours worth of irc logs. :)

--
Soren Hansen
Ubuntu Developer    http://www.ubuntu.com/
OpenStack Developer http://www.openstack.org/

Revision history for this message
Soren Hansen (soren) wrote :

2010/11/24 Eric Day <email address hidden>:
> === modified file 'nova/api/ec2/cloud.py'
> --- nova/api/ec2/cloud.py       2010-11-18 21:27:52 +0000
> +++ nova/api/ec2/cloud.py       2010-11-24 22:55:33 +0000
> @@ -39,7 +39,7 @@
>  from nova import quota
>  from nova import rpc
>  from nova import utils
> -from nova.compute.instance_types import INSTANCE_TYPES
> +from nova.compute import instance_types

This needs to move out of code anyway. Filed bug #681411.

> @@ -260,7 +255,7 @@
>         return True
>
>     def describe_security_groups(self, context, group_name=None, **kwargs):
> -        self._ensure_default_security_group(context)
> +        self.compute_manager.ensure_default_security_group(context)
>         if context.user.is_admin():
>             groups = db.security_group_get_all(context)
>         else:

I understand the motivation to consolidate this code in the compute
manager. I just think that instances launched through the EC2 API should
land in one security group by default and instances launched through the
OpenStack API should land in another by default. EC2 restricts all
access to instances by default, while Rackspace has traditionally left
them open, leaving it to the owner of the instance to shield it off.

I filed bug #681416 to track this.

> @@ -505,9 +500,8 @@
>         if quota.allowed_volumes(context, 1, size) < 1:
>             logging.warn("Quota exceeeded for %s, tried to create %sG volume",
>                          context.project_id, size)
> -            raise QuotaError("Volume quota exceeded. You cannot "
> -                             "create a volume of size %s" %
> -                             size)
> +            raise quota.QuotaError("Volume quota exceeded. You cannot "
> +                                   "create a volume of size %s" % size)

We should include a unit, not just the number. Perhaps the user thinks
he's creating a 1000 MB volume, but we're actually blocking him from
creating a 1000 GB volume.

Filed bug #681417.
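A minimal sketch of the unit-qualified message suggested here; the `QuotaError` name echoes the branch, but the helper functions and their signatures are invented for illustration, not nova's actual code:

```python
class QuotaError(Exception):
    """Stand-in for nova's quota.QuotaError."""


def volume_quota_message(size, unit="GB"):
    # Spell out the unit so a user who meant 1000 MB can see the
    # request was actually interpreted as 1000 GB.
    return ("Volume quota exceeded. You cannot "
            "create a volume of size %s %s" % (size, unit))


def check_volume_quota(allowed, size):
    """Raise with an explicit unit when the volume quota is exhausted."""
    if allowed < 1:
        raise QuotaError(volume_quota_message(size))
```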

> === modified file 'nova/compute/manager.py'
> --- nova/compute/manager.py     2010-11-03 22:06:00 +0000
> +++ nova/compute/manager.py     2010-11-24 22:55:33 +0000
> @@ -36,13 +36,18 @@
>
>  import logging
> +import time
>
>  from twisted.internet import defer
>
> +from nova import db
>  from nova import exception
>  from nova import flags
>  from nova import manager
> +from nova import quota
> +from nova import rpc
>  from nova import utils
> +from nova.compute import instance_types
>  from nova.compute import power_state
>
>
> @@ -53,6 +58,11 @@
>                     'Driver to use for volume creation')
>
>
> +def generate_default_hostname(internal_id):
> +    """Default function to generate a hostname given an instance reference."""
> +    return str(internal_id)
> +
> +
>  class ComputeManager(manager.Manager):
>     """Manages the running instances from creation to destruction."""
>
> @@ -84,6 +94,126 @@
>         """This call passes stright through to the virtualization driver."""
>         yield self.driver.refresh_security_group(security_group_id)
>
> +    # TODO(eday): network_topic arg should go away once we push network
> +    # allocation into the scheduler or ...


Revision history for this message
Eric Day (eday) wrote :

I agree on all the points regarding security groups and the other things you've filed bugs for. I was just trying to shuffle code around so we only need to edit things in one place, not actually change the logic.

As far as putting the code in compute manager, I started by creating nova.compute.api, but this felt weird too. It actually made sense to me in the end to put them right next to each other, just knowing they will be on opposite ends of the worker. Perhaps we could just split the class and have both in the manager.py file. I'm open.

Revision history for this message
Soren Hansen (soren) wrote :

Separate classes in the same file sounds good. It also lets us do things in the individual __init__ methods without worrying that they might get run in the wrong context.

The bugs I filed and mentioned weren't meant as comments on your code. It was just stuff I stumbled upon while reading your diff. I guess I should have left it out to avoid the confusion. Just ignore it. :)
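The "two classes in one file" split discussed above can be sketched like this; the class names echo the branch, but the method bodies and the Fake* collaborators are invented for this illustration:

```python
class FakeDB(object):
    """Stand-in for the db driver: stores instance records in memory."""

    def __init__(self):
        self.instances = {}
        self.next_id = 1

    def instance_create(self, values):
        inst = dict(values, id=self.next_id)
        self.instances[self.next_id] = inst
        self.next_id += 1
        return inst


class FakeRPC(object):
    """Stand-in for nova.rpc: records casts instead of sending them."""

    def __init__(self):
        self.casts = []

    def cast(self, topic, msg):
        self.casts.append((topic, msg))


class ComputeAPI(object):
    """Runs on the API host: writes the DB record, casts to the scheduler."""

    def __init__(self, db, rpc):
        self.db = db
        self.rpc = rpc

    def create_instance(self, **values):
        instance = self.db.instance_create(values)
        self.rpc.cast('scheduler', {'method': 'run_instance',
                                    'args': {'instance_id': instance['id']}})
        return instance


class ComputeManager(object):
    """Runs on the compute worker: receives the cast and boots the guest."""

    def __init__(self, driver):
        self.driver = driver

    def run_instance(self, instance_id):
        self.driver.spawn(instance_id)
```

Keeping both classes side by side makes the worker boundary explicit: each __init__ only wires up dependencies that exist on its own host.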

Revision history for this message
Vish Ishaya (vishvananda) wrote :

The stuff on IRC was simply noticing that exceptions from image_get weren't handled consistently. Giving the workers an api seems reasonable to me. I think it is easier conceptually than having the same class work both locally and remotely.

How do you see this working for the volume? Code from volume_manager runs on the api host (create the db record), the compute host (discover the volume), and the volume host (create and export the volume). Do we have three separate classes? Or would VolumeAPI encompass all of the functions that are called by other workers (the api host and compute host code)?

Regardless, we need to move the other workers' code into this format ASAP.

review: Approve
Revision history for this message
Soren Hansen (soren) wrote :

This is great stuff! Thanks!

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Attempt to merge into lp:nova failed due to conflicts:

text conflict in nova/api/openstack/servers.py
text conflict in nova/compute/manager.py

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

lgtm

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Attempt to merge into lp:nova failed due to conflicts:

text conflict in nova/compute/manager.py

Revision history for this message
Michael Gundlach (gundlach) wrote :

465:s/quote/quota/ and shouldn't that be in multiline triple-quote form?

763:isn't it a pep8 violation to not have blank lines b/w class and comment? also, i'm not sure i like having an extra base class just to set 'self.db', versus making a helper function in the db module that can be called explicitly to set self.db. but that's bikeshedding so i'm ok with how it is now too.

lgtm.

review: Approve
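The two options weighed above can be sketched side by side; everything here is illustrative (nova's actual base class imports the driver named by FLAGS.db_driver):

```python
FAKE_DRIVER = object()  # stands in for the imported db driver object


def get_db_driver():
    """Helper-function style: each class opts in to the driver explicitly."""
    return FAKE_DRIVER


class Base(object):
    """Base-class style (what the branch adds): __init__ sets self.db."""

    def __init__(self, db_driver=None):
        self.db = db_driver if db_driver is not None else FAKE_DRIVER


class ManagerViaBase(Base):
    pass


class ManagerViaHelper(object):
    def __init__(self):
        self.db = get_db_driver()
```

Both end up with the same self.db; the base class saves a line per subclass, while the helper keeps the dependency visible at each call site.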

Preview Diff

=== modified file 'nova/api/ec2/cloud.py'
--- nova/api/ec2/cloud.py 2010-11-30 08:19:32 +0000
+++ nova/api/ec2/cloud.py 2010-12-02 17:38:10 +0000
@@ -39,7 +39,8 @@
 from nova import quota
 from nova import rpc
 from nova import utils
-from nova.compute.instance_types import INSTANCE_TYPES
+from nova.compute import api as compute_api
+from nova.compute import instance_types
 from nova.api import cloud
 from nova.image.s3 import S3ImageService

@@ -50,11 +51,6 @@
 InvalidInputException = exception.InvalidInputException


-class QuotaError(exception.ApiError):
-    """Quota Exceeeded"""
-    pass
-
-
 def _gen_key(context, user_id, key_name):
     """Generate a key

@@ -99,7 +95,7 @@
     """
     def __init__(self):
         self.network_manager = utils.import_object(FLAGS.network_manager)
-        self.compute_manager = utils.import_object(FLAGS.compute_manager)
+        self.compute_api = compute_api.ComputeAPI()
         self.image_service = S3ImageService()
         self.setup()

@@ -127,7 +123,7 @@
         for instance in db.instance_get_all_by_project(context, project_id):
             if instance['fixed_ip']:
                 line = '%s slots=%d' % (instance['fixed_ip']['address'],
-                        INSTANCE_TYPES[instance['instance_type']]['vcpus'])
+                        instance['vcpus'])
                 key = str(instance['key_name'])
                 if key in result:
                     result[key].append(line)
@@ -260,7 +256,7 @@
         return True

     def describe_security_groups(self, context, group_name=None, **kwargs):
-        self._ensure_default_security_group(context)
+        self.compute_api.ensure_default_security_group(context)
         if context.user.is_admin():
             groups = db.security_group_get_all(context)
         else:
@@ -358,7 +354,7 @@
         return False

     def revoke_security_group_ingress(self, context, group_name, **kwargs):
-        self._ensure_default_security_group(context)
+        self.compute_api.ensure_default_security_group(context)
         security_group = db.security_group_get_by_name(context,
                                                        context.project_id,
                                                        group_name)
@@ -383,7 +379,7 @@
     # for these operations, so support for newer API versions
     # is sketchy.
     def authorize_security_group_ingress(self, context, group_name, **kwargs):
-        self._ensure_default_security_group(context)
+        self.compute_api.ensure_default_security_group(context)
         security_group = db.security_group_get_by_name(context,
                                                        context.project_id,
                                                        group_name)
@@ -419,7 +415,7 @@
         return source_project_id

     def create_security_group(self, context, group_name, group_description):
-        self._ensure_default_security_group(context)
+        self.compute_api.ensure_default_security_group(context)
         if db.security_group_exists(context, context.project_id, group_name):
             raise exception.ApiError('group %s already exists' % group_name)

@@ -505,9 +501,8 @@
         if quota.allowed_volumes(context, 1, size) < 1:
             logging.warn("Quota exceeeded for %s, tried to create %sG volume",
                          context.project_id, size)
-            raise QuotaError("Volume quota exceeded. You cannot "
-                             "create a volume of size %s" %
-                             size)
+            raise quota.QuotaError("Volume quota exceeded. You cannot "
+                                   "create a volume of size %s" % size)
         vol = {}
         vol['size'] = size
         vol['user_id'] = context.user.id
@@ -699,8 +694,8 @@
         if quota.allowed_floating_ips(context, 1) < 1:
             logging.warn("Quota exceeeded for %s, tried to allocate address",
                          context.project_id)
-            raise QuotaError("Address quota exceeded. You cannot "
-                             "allocate any more addresses")
+            raise quota.QuotaError("Address quota exceeded. You cannot "
+                                   "allocate any more addresses")
         network_topic = self._get_network_topic(context)
         public_ip = rpc.call(context,
                              network_topic,
@@ -752,137 +747,25 @@
                          "args": {"network_id": network_ref['id']}})
         return db.queue_get_for(context, FLAGS.network_topic, host)

-    def _ensure_default_security_group(self, context):
-        try:
-            db.security_group_get_by_name(context,
-                                          context.project_id,
-                                          'default')
-        except exception.NotFound:
-            values = {'name': 'default',
-                      'description': 'default',
-                      'user_id': context.user.id,
-                      'project_id': context.project_id}
-            group = db.security_group_create(context, values)
-
     def run_instances(self, context, **kwargs):
-        instance_type = kwargs.get('instance_type', 'm1.small')
-        if instance_type not in INSTANCE_TYPES:
-            raise exception.ApiError("Unknown instance type: %s",
-                                     instance_type)
-        # check quota
-        max_instances = int(kwargs.get('max_count', 1))
-        min_instances = int(kwargs.get('min_count', max_instances))
-        num_instances = quota.allowed_instances(context,
-                                                max_instances,
-                                                instance_type)
-        if num_instances < min_instances:
-            logging.warn("Quota exceeeded for %s, tried to run %s instances",
-                         context.project_id, min_instances)
-            raise QuotaError("Instance quota exceeded. You can only "
-                             "run %s more instances of this type." %
-                             num_instances, "InstanceLimitExceeded")
-        # make sure user can access the image
-        # vpn image is private so it doesn't show up on lists
-        vpn = kwargs['image_id'] == FLAGS.vpn_image_id
-
-        if not vpn:
-            image = self.image_service.show(context, kwargs['image_id'])
-
-        # FIXME(ja): if image is vpn, this breaks
-        # get defaults from imagestore
-        image_id = image['imageId']
-        kernel_id = image.get('kernelId', FLAGS.default_kernel)
-        ramdisk_id = image.get('ramdiskId', FLAGS.default_ramdisk)
-
-        # API parameters overrides of defaults
-        kernel_id = kwargs.get('kernel_id', kernel_id)
-        ramdisk_id = kwargs.get('ramdisk_id', ramdisk_id)
-
-        # make sure we have access to kernel and ramdisk
-        self.image_service.show(context, kernel_id)
-        self.image_service.show(context, ramdisk_id)
-
-        logging.debug("Going to run %s instances...", num_instances)
-        launch_time = time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime())
-        key_data = None
-        if 'key_name' in kwargs:
-            key_pair_ref = db.key_pair_get(context,
-                                           context.user.id,
-                                           kwargs['key_name'])
-            key_data = key_pair_ref['public_key']
-
-        security_group_arg = kwargs.get('security_group', ["default"])
-        if not type(security_group_arg) is list:
-            security_group_arg = [security_group_arg]
-
-        security_groups = []
-        self._ensure_default_security_group(context)
-        for security_group_name in security_group_arg:
-            group = db.security_group_get_by_name(context,
-                                                  context.project_id,
-                                                  security_group_name)
-            security_groups.append(group['id'])
-
-        reservation_id = utils.generate_uid('r')
-        base_options = {}
-        base_options['state_description'] = 'scheduling'
-        base_options['image_id'] = image_id
-        base_options['kernel_id'] = kernel_id
-        base_options['ramdisk_id'] = ramdisk_id
-        base_options['reservation_id'] = reservation_id
-        base_options['key_data'] = key_data
-        base_options['key_name'] = kwargs.get('key_name', None)
-        base_options['user_id'] = context.user.id
-        base_options['project_id'] = context.project_id
-        base_options['user_data'] = kwargs.get('user_data', '')
-
-        base_options['display_name'] = kwargs.get('display_name')
-        base_options['display_description'] = kwargs.get('display_description')
-
-        type_data = INSTANCE_TYPES[instance_type]
-        base_options['instance_type'] = instance_type
-        base_options['memory_mb'] = type_data['memory_mb']
-        base_options['vcpus'] = type_data['vcpus']
-        base_options['local_gb'] = type_data['local_gb']
-        elevated = context.elevated()
-
-        for num in range(num_instances):
-
-            instance_ref = self.compute_manager.create_instance(context,
-                security_groups,
-                mac_address=utils.generate_mac(),
-                launch_index=num,
-                **base_options)
-            inst_id = instance_ref['id']
-
-            internal_id = instance_ref['internal_id']
-            ec2_id = internal_id_to_ec2_id(internal_id)
-
-            self.compute_manager.update_instance(context,
-                                                 inst_id,
-                                                 hostname=ec2_id)
-
-            # TODO(vish): This probably should be done in the scheduler
-            #             or in compute as a call. The network should be
-            #             allocated after the host is assigned and setup
-            #             can happen at the same time.
-            address = self.network_manager.allocate_fixed_ip(context,
-                                                             inst_id,
-                                                             vpn)
-            network_topic = self._get_network_topic(context)
-            rpc.cast(elevated,
-                     network_topic,
-                     {"method": "setup_fixed_ip",
-                      "args": {"address": address}})
-
-            rpc.cast(context,
-                     FLAGS.scheduler_topic,
-                     {"method": "run_instance",
-                      "args": {"topic": FLAGS.compute_topic,
-                               "instance_id": inst_id}})
-            logging.debug("Casting to scheduler for %s/%s's instance %s" %
-                          (context.project.name, context.user.name, inst_id))
-        return self._format_run_instances(context, reservation_id)
+        max_count = int(kwargs.get('max_count', 1))
+        instances = self.compute_api.create_instances(context,
+            instance_types.get_by_type(kwargs.get('instance_type', None)),
+            self.image_service,
+            kwargs['image_id'],
+            self._get_network_topic(context),
+            min_count=int(kwargs.get('min_count', max_count)),
+            max_count=max_count,
+            kernel_id=kwargs.get('kernel_id'),
+            ramdisk_id=kwargs.get('ramdisk_id'),
+            name=kwargs.get('display_name'),
+            description=kwargs.get('display_description'),
+            user_data=kwargs.get('user_data', ''),
+            key_name=kwargs.get('key_name'),
+            security_group=kwargs.get('security_group'),
+            generate_hostname=internal_id_to_ec2_id)
+        return self._format_run_instances(context,
+                                          instances[0]['reservation_id'])

     def terminate_instances(self, context, instance_id, **kwargs):
         """Terminate each instance in instance_id, which is a list of ec2 ids.
@@ -907,7 +790,7 @@
                              id_str)
             continue
             now = datetime.datetime.utcnow()
-            self.compute_manager.update_instance(context,
+            self.compute_api.update_instance(context,
                                              instance_ref['id'],
                                              state_description='terminating',
                                              state=0,

=== modified file 'nova/api/openstack/servers.py'
--- nova/api/openstack/servers.py 2010-12-01 20:18:24 +0000
+++ nova/api/openstack/servers.py 2010-12-02 17:38:10 +0000
@@ -27,6 +27,7 @@
 from nova import context
 from nova.api import cloud
 from nova.api.openstack import faults
+from nova.compute import api as compute_api
 from nova.compute import instance_types
 from nova.compute import power_state
 import nova.api.openstack
@@ -95,7 +96,7 @@
         db_driver = FLAGS.db_driver
         self.db_driver = utils.import_object(db_driver)
         self.network_manager = utils.import_object(FLAGS.network_manager)
-        self.compute_manager = utils.import_object(FLAGS.compute_manager)
+        self.compute_api = compute_api.ComputeAPI()
         super(Controller, self).__init__()

     def index(self, req):
@@ -140,22 +141,23 @@

     def create(self, req):
         """ Creates a new server for a given user """
-
         env = self._deserialize(req.body, req)
         if not env:
             return faults.Fault(exc.HTTPUnprocessableEntity())

-        #try:
-        inst = self._build_server_instance(req, env)
-        #except Exception, e:
-        #    return faults.Fault(exc.HTTPUnprocessableEntity())
-
         user_id = req.environ['nova.context']['user']['id']
-        rpc.cast(context.RequestContext(user_id, user_id),
-                 FLAGS.compute_topic,
-                 {"method": "run_instance",
-                  "args": {"instance_id": inst['id']}})
-        return _entity_inst(inst)
+        ctxt = context.RequestContext(user_id, user_id)
+        key_pair = self.db_driver.key_pair_get_all_by_user(None, user_id)[0]
+        instances = self.compute_api.create_instances(ctxt,
+            instance_types.get_by_flavor_id(env['server']['flavorId']),
+            utils.import_object(FLAGS.image_service),
+            env['server']['imageId'],
+            self._get_network_topic(ctxt),
+            name=env['server']['name'],
+            description=env['server']['name'],
+            key_name=key_pair['name'],
+            key_data=key_pair['public_key'])
+        return _entity_inst(instances[0])

     def update(self, req, id):
         """ Updates the server name or password """
@@ -191,78 +193,6 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         cloud.reboot(id)

-    def _build_server_instance(self, req, env):
-        """Build instance data structure and save it to the data store."""
-        ltime = time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime())
-        inst = {}
-
-        user_id = req.environ['nova.context']['user']['id']
-        ctxt = context.RequestContext(user_id, user_id)
-
-        flavor_id = env['server']['flavorId']
-
-        instance_type, flavor = [(k, v) for k, v in
-                                 instance_types.INSTANCE_TYPES.iteritems()
-                                 if v['flavorid'] == flavor_id][0]
-
-        image_id = env['server']['imageId']
-        img_service = utils.import_object(FLAGS.image_service)
-
-        image = img_service.show(image_id)
-
-        if not image:
-            raise Exception("Image not found")
-
-        inst['image_id'] = image_id
-        inst['user_id'] = user_id
-        inst['launch_time'] = ltime
-        inst['mac_address'] = utils.generate_mac()
-        inst['project_id'] = user_id
-
-        inst['state_description'] = 'scheduling'
-        inst['kernel_id'] = image.get('kernelId', FLAGS.default_kernel)
-        inst['ramdisk_id'] = image.get('ramdiskId', FLAGS.default_ramdisk)
-        inst['reservation_id'] = utils.generate_uid('r')
-
-        inst['display_name'] = env['server']['name']
-        inst['display_description'] = env['server']['name']
-
-        #TODO(dietz) this may be ill advised
-        key_pair_ref = self.db_driver.key_pair_get_all_by_user(
-            None, user_id)[0]
-
-        inst['key_data'] = key_pair_ref['public_key']
-        inst['key_name'] = key_pair_ref['name']
-
-        #TODO(dietz) stolen from ec2 api, see TODO there
-        inst['security_group'] = 'default'
-
-        # Flavor related attributes
-        inst['instance_type'] = instance_type
-        inst['memory_mb'] = flavor['memory_mb']
-        inst['vcpus'] = flavor['vcpus']
-        inst['local_gb'] = flavor['local_gb']
-        inst['mac_address'] = utils.generate_mac()
-        inst['launch_index'] = 0
-
-        ref = self.compute_manager.create_instance(ctxt, **inst)
-        inst['id'] = ref['internal_id']
-
-        inst['hostname'] = str(ref['internal_id'])
-        self.compute_manager.update_instance(ctxt, inst['id'], **inst)
-
-        address = self.network_manager.allocate_fixed_ip(ctxt,
-                                                         inst['id'])
-
-        # TODO(vish): This probably should be done in the scheduler
-        #             network is setup when host is assigned
-        network_topic = self._get_network_topic(ctxt)
-        rpc.call(ctxt,
-                 network_topic,
-                 {"method": "setup_fixed_ip",
-                  "args": {"address": address}})
-        return inst
-
     def _get_network_topic(self, context):
         """Retrieves the network host for a project"""
         network_ref = self.network_manager.get_network(context)

=== added file 'nova/compute/api.py'
--- nova/compute/api.py 1970-01-01 00:00:00 +0000
+++ nova/compute/api.py 2010-12-02 17:38:10 +0000
@@ -0,0 +1,212 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2010 United States Government as represented by the
4# Administrator of the National Aeronautics and Space Administration.
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19"""
20Handles all API requests relating to instances (guest vms).
21"""
22
23import logging
24import time
25
26from nova import db
27from nova import exception
28from nova import flags
29from nova import quota
30from nova import rpc
31from nova import utils
32from nova.compute import instance_types
33from nova.db import base
34
35FLAGS = flags.FLAGS
36
37
38def generate_default_hostname(internal_id):
39 """Default function to generate a hostname given an instance reference."""
40 return str(internal_id)
41
42
43class ComputeAPI(base.Base):
44 """API for interacting with the compute manager."""
45
46 def __init__(self, **kwargs):
47 self.network_manager = utils.import_object(FLAGS.network_manager)
48 super(ComputeAPI, self).__init__(**kwargs)
49
50 # TODO(eday): network_topic arg should go away once we push network
51 # allocation into the scheduler or compute worker.
52 def create_instances(self, context, instance_type, image_service, image_id,
53 network_topic, min_count=1, max_count=1,
54 kernel_id=None, ramdisk_id=None, name='',
55 description='', user_data='', key_name=None,
56 key_data=None, security_group='default',
57 generate_hostname=generate_default_hostname):
58 """Create the number of instances requested if quote and
59 other arguments check out ok."""
60
61 num_instances = quota.allowed_instances(context, max_count,
62 instance_type)
63 if num_instances < min_count:
64 logging.warn("Quota exceeeded for %s, tried to run %s instances",
65 context.project_id, min_count)
66 raise quota.QuotaError("Instance quota exceeded. You can only "
67 "run %s more instances of this type." %
68 num_instances, "InstanceLimitExceeded")
69
70 is_vpn = image_id == FLAGS.vpn_image_id
71 if not is_vpn:
72 image = image_service.show(context, image_id)
73 if kernel_id is None:
74 kernel_id = image.get('kernelId', FLAGS.default_kernel)
75 if ramdisk_id is None:
76 ramdisk_id = image.get('ramdiskId', FLAGS.default_ramdisk)
77
78 # Make sure we have access to kernel and ramdisk
79 image_service.show(context, kernel_id)
80 image_service.show(context, ramdisk_id)
81
82 if security_group is None:
83 security_group = ['default']
84 if not type(security_group) is list:
85 security_group = [security_group]
86
87 security_groups = []
88 self.ensure_default_security_group(context)
89 for security_group_name in security_group:
90 group = db.security_group_get_by_name(context,
91 context.project_id,
92 security_group_name)
93 security_groups.append(group['id'])
94
95 if key_data is None and key_name:
96 key_pair = db.key_pair_get(context, context.user_id, key_name)
97 key_data = key_pair['public_key']
98
99 type_data = instance_types.INSTANCE_TYPES[instance_type]
100 base_options = {
101 'reservation_id': utils.generate_uid('r'),
+            'image_id': image_id,
+            'kernel_id': kernel_id,
+            'ramdisk_id': ramdisk_id,
+            'state_description': 'scheduling',
+            'user_id': context.user_id,
+            'project_id': context.project_id,
+            'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
+            'instance_type': instance_type,
+            'memory_mb': type_data['memory_mb'],
+            'vcpus': type_data['vcpus'],
+            'local_gb': type_data['local_gb'],
+            'display_name': name,
+            'display_description': description,
+            'key_name': key_name,
+            'key_data': key_data}
+
+        elevated = context.elevated()
+        instances = []
+        logging.debug("Going to run %s instances...", num_instances)
+        for num in range(num_instances):
+            instance = dict(mac_address=utils.generate_mac(),
+                            launch_index=num,
+                            **base_options)
+            instance_ref = self.create_instance(context, security_groups,
+                                                **instance)
+            instance_id = instance_ref['id']
+            internal_id = instance_ref['internal_id']
+            hostname = generate_hostname(internal_id)
+            self.update_instance(context, instance_id, hostname=hostname)
+            instances.append(dict(id=instance_id, internal_id=internal_id,
+                                  hostname=hostname, **instance))
+
+            # TODO(vish): This probably should be done in the scheduler
+            #             or in compute as a call. The network should be
+            #             allocated after the host is assigned and setup
+            #             can happen at the same time.
+            address = self.network_manager.allocate_fixed_ip(context,
+                                                             instance_id,
+                                                             is_vpn)
+            rpc.cast(elevated,
+                     network_topic,
+                     {"method": "setup_fixed_ip",
+                      "args": {"address": address}})
+
+            logging.debug("Casting to scheduler for %s/%s's instance %s" %
+                          (context.project_id, context.user_id, instance_id))
+            rpc.cast(context,
+                     FLAGS.scheduler_topic,
+                     {"method": "run_instance",
+                      "args": {"topic": FLAGS.compute_topic,
+                               "instance_id": instance_id}})
+
+        return instances
+
+    def ensure_default_security_group(self, context):
+        try:
+            db.security_group_get_by_name(context, context.project_id,
+                                          'default')
+        except exception.NotFound:
+            values = {'name': 'default',
+                      'description': 'default',
+                      'user_id': context.user_id,
+                      'project_id': context.project_id}
+            group = db.security_group_create(context, values)
+
+    def create_instance(self, context, security_groups=None, **kwargs):
+        """Creates the instance in the datastore and returns the
+        new instance as a mapping
+
+        :param context: The security context
+        :param security_groups: list of security group ids to
+                                attach to the instance
+        :param kwargs: All additional keyword args are treated
+                       as data fields of the instance to be
+                       created
+
+        :retval Returns a mapping of the instance information
+                that has just been created
+
+        """
+        instance_ref = self.db.instance_create(context, kwargs)
+        inst_id = instance_ref['id']
+        # Set sane defaults if not specified
+        if kwargs.get('display_name') is None:
+            display_name = "Server %s" % instance_ref['internal_id']
+            instance_ref['display_name'] = display_name
+            self.db.instance_update(context, inst_id,
+                                    {'display_name': display_name})
+
+        elevated = context.elevated()
+        if not security_groups:
+            security_groups = []
+        for security_group_id in security_groups:
+            self.db.instance_add_security_group(elevated,
+                                                inst_id,
+                                                security_group_id)
+        return instance_ref
+
+    def update_instance(self, context, instance_id, **kwargs):
+        """Updates the instance in the datastore.
+
+        :param context: The security context
+        :param instance_id: ID of the instance to update
+        :param kwargs: All additional keyword args are treated
+                       as data fields of the instance to be
+                       updated
+
+        :retval None
+
+        """
+        self.db.instance_update(context, instance_id, kwargs)
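The `create_instance` method above defaults `display_name` to "Server %s" from the instance's `internal_id` when the caller omits it, which is what the updated `test_create_instance_defaults_display_name` test exercises. A minimal standalone sketch of just that defaulting logic (the `FakeDB` class and its id generation are stand-ins for illustration, not the real `nova.db` API):

```python
# Stand-in for the nova.db driver; only the two calls used by the sketch.
class FakeDB(object):
    def __init__(self):
        self.instances = {}
        self._next_id = 0

    def instance_create(self, context, values):
        self._next_id += 1
        # internal_id offset is arbitrary, just to differ from 'id'
        ref = dict(values, id=self._next_id, internal_id=self._next_id + 9)
        self.instances[ref['id']] = ref
        return ref

    def instance_update(self, context, instance_id, values):
        self.instances[instance_id].update(values)


def create_instance(db, context, **kwargs):
    """Create the instance, defaulting display_name if not specified."""
    instance_ref = db.instance_create(context, kwargs)
    if kwargs.get('display_name') is None:
        display_name = "Server %s" % instance_ref['internal_id']
        instance_ref['display_name'] = display_name
        db.instance_update(context, instance_ref['id'],
                           {'display_name': display_name})
    return instance_ref
```

Note the default is written back to the datastore as well as the returned mapping, so later reads see the same name.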
=== modified file 'nova/compute/instance_types.py'
--- nova/compute/instance_types.py 2010-10-18 22:58:42 +0000
+++ nova/compute/instance_types.py 2010-12-02 17:38:10 +0000
@@ -21,9 +21,29 @@
 The built-in instance properties.
 """
 
+from nova import flags
+
+FLAGS = flags.FLAGS
 INSTANCE_TYPES = {
     'm1.tiny': dict(memory_mb=512, vcpus=1, local_gb=0, flavorid=1),
     'm1.small': dict(memory_mb=2048, vcpus=1, local_gb=20, flavorid=2),
     'm1.medium': dict(memory_mb=4096, vcpus=2, local_gb=40, flavorid=3),
     'm1.large': dict(memory_mb=8192, vcpus=4, local_gb=80, flavorid=4),
     'm1.xlarge': dict(memory_mb=16384, vcpus=8, local_gb=160, flavorid=5)}
+
+
+def get_by_type(instance_type):
+    """Build instance data structure and save it to the data store."""
+    if instance_type is None:
+        return FLAGS.default_instance_type
+    if instance_type not in INSTANCE_TYPES:
+        raise exception.ApiError("Unknown instance type: %s",
+                                 instance_type)
+    return instance_type
+
+
+def get_by_flavor_id(flavor_id):
+    for instance_type, details in INSTANCE_TYPES.iteritems():
+        if details['flavorid'] == flavor_id:
+            return instance_type
+    return FLAGS.default_instance_type
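Two review notes on this hunk: `get_by_flavor_id` silently falls back to the default instance type for an unknown flavor id, and the error branch in `get_by_type` references `exception.ApiError` even though this file only imports `flags`, so an unknown type would raise `NameError` as written. A standalone sketch of the lookup behavior (the `INSTANCE_TYPES` dict is copied from the diff; `DEFAULT_INSTANCE_TYPE` stands in for `FLAGS.default_instance_type`, and `ValueError` stands in for the nova `ApiError`):

```python
INSTANCE_TYPES = {
    'm1.tiny': dict(memory_mb=512, vcpus=1, local_gb=0, flavorid=1),
    'm1.small': dict(memory_mb=2048, vcpus=1, local_gb=20, flavorid=2),
    'm1.medium': dict(memory_mb=4096, vcpus=2, local_gb=40, flavorid=3),
    'm1.large': dict(memory_mb=8192, vcpus=4, local_gb=80, flavorid=4),
    'm1.xlarge': dict(memory_mb=16384, vcpus=8, local_gb=160, flavorid=5)}

# Stand-in for FLAGS.default_instance_type
DEFAULT_INSTANCE_TYPE = 'm1.small'


def get_by_type(instance_type):
    """Validate a type name, falling back to the default when None."""
    if instance_type is None:
        return DEFAULT_INSTANCE_TYPE
    if instance_type not in INSTANCE_TYPES:
        # The diff raises exception.ApiError here, but never imports
        # nova.exception, so this path would NameError as written.
        raise ValueError("Unknown instance type: %s" % instance_type)
    return instance_type


def get_by_flavor_id(flavor_id):
    """Reverse lookup by flavorid; unknown ids get the default type."""
    for instance_type, details in INSTANCE_TYPES.items():
        if details['flavorid'] == flavor_id:
            return instance_type
    return DEFAULT_INSTANCE_TYPE
```

Whether the silent fallback in `get_by_flavor_id` is intended (vs. raising like `get_by_type`) may be worth confirming with the author.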
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py 2010-12-02 16:08:56 +0000
+++ nova/compute/manager.py 2010-12-02 17:38:10 +0000
@@ -39,13 +39,13 @@
 
 from twisted.internet import defer
 
+from nova import db
 from nova import exception
 from nova import flags
 from nova import manager
 from nova import utils
 from nova.compute import power_state
 
-
 FLAGS = flags.FLAGS
 flags.DEFINE_string('instances_path', '$state_path/instances',
                     'where instances are stored on disk')
@@ -84,53 +84,6 @@
         """This call passes stright through to the virtualization driver."""
         yield self.driver.refresh_security_group(security_group_id)
 
-    def create_instance(self, context, security_groups=None, **kwargs):
-        """Creates the instance in the datastore and returns the
-        new instance as a mapping
-
-        :param context: The security context
-        :param security_groups: list of security group ids to
-                                attach to the instance
-        :param kwargs: All additional keyword args are treated
-                       as data fields of the instance to be
-                       created
-
-        :retval Returns a mapping of the instance information
-                that has just been created
-
-        """
-        instance_ref = self.db.instance_create(context, kwargs)
-        inst_id = instance_ref['id']
-        # Set sane defaults if not specified
-        if kwargs.get('display_name') is None:
-            display_name = "Server %s" % instance_ref['internal_id']
-            instance_ref['display_name'] = display_name
-            self.db.instance_update(context, inst_id,
-                                    {'display_name': display_name})
-
-        elevated = context.elevated()
-        if not security_groups:
-            security_groups = []
-        for security_group_id in security_groups:
-            self.db.instance_add_security_group(elevated,
-                                                inst_id,
-                                                security_group_id)
-        return instance_ref
-
-    def update_instance(self, context, instance_id, **kwargs):
-        """Updates the instance in the datastore.
-
-        :param context: The security context
-        :param instance_id: ID of the instance to update
-        :param kwargs: All additional keyword args are treated
-                       as data fields of the instance to be
-                       updated
-
-        :retval None
-
-        """
-        self.db.instance_update(context, instance_id, kwargs)
-
     @defer.inlineCallbacks
     @exception.wrap_exception
     def run_instance(self, context, instance_id, **_kwargs):
=== added file 'nova/db/base.py'
--- nova/db/base.py 1970-01-01 00:00:00 +0000
+++ nova/db/base.py 2010-12-02 17:38:10 +0000
@@ -0,0 +1,36 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2010 United States Government as represented by the
+# Administrator of the National Aeronautics and Space Administration.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""
+Base class for classes that need modular database access.
+"""
+
+from nova import utils
+from nova import flags
+
+FLAGS = flags.FLAGS
+flags.DEFINE_string('db_driver', 'nova.db.api',
+                    'driver to use for database access')
+
+
+class Base(object):
+    """DB driver is injected in the init method"""
+    def __init__(self, db_driver=None):
+        if not db_driver:
+            db_driver = FLAGS.db_driver
+        self.db = utils.import_object(db_driver)  # pylint: disable-msg=C0103
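The new `nova.db.base.Base` resolves its database driver from a dotted-path flag at construction time, and `Manager` (below) now inherits that instead of duplicating the injection. A minimal standalone analog of the pattern, using stdlib `importlib` in place of nova's `utils.import_object` (the `DEFAULT_DB_DRIVER` constant and the use of `json`/`csv` as stand-in driver modules are illustrative assumptions):

```python
import importlib

# Stand-in for the --db_driver flag default ('nova.db.api' in the diff).
DEFAULT_DB_DRIVER = 'json'


class Base(object):
    """DB driver is injected in the init method."""

    def __init__(self, db_driver=None):
        if not db_driver:
            db_driver = DEFAULT_DB_DRIVER
        # nova uses utils.import_object; import_module is the stdlib analog
        # for module paths.
        self.db = importlib.import_module(db_driver)


class Manager(Base):
    """Managers inherit the injected self.db handle from Base."""

    def __init__(self, host=None, db_driver=None):
        self.host = host
        super(Manager, self).__init__(db_driver)
```

The point of the refactor is that any class needing modular db access (not just managers, e.g. the new `ComputeAPI`) can subclass `Base` rather than re-implementing the flag lookup.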
=== modified file 'nova/manager.py'
--- nova/manager.py 2010-11-07 19:51:40 +0000
+++ nova/manager.py 2010-12-02 17:38:10 +0000
@@ -53,23 +53,19 @@
 
 from nova import utils
 from nova import flags
+from nova.db import base
 
 from twisted.internet import defer
 
 FLAGS = flags.FLAGS
-flags.DEFINE_string('db_driver', 'nova.db.api',
-                    'driver to use for volume creation')
-
-
-class Manager(object):
-    """DB driver is injected in the init method"""
+
+
+class Manager(base.Base):
     def __init__(self, host=None, db_driver=None):
         if not host:
             host = FLAGS.host
         self.host = host
-        if not db_driver:
-            db_driver = FLAGS.db_driver
-        self.db = utils.import_object(db_driver)  # pylint: disable-msg=C0103
+        super(Manager, self).__init__(db_driver)
 
     @defer.inlineCallbacks
     def periodic_tasks(self, context=None):
=== modified file 'nova/quota.py'
--- nova/quota.py 2010-10-21 18:49:51 +0000
+++ nova/quota.py 2010-12-02 17:38:10 +0000
@@ -94,3 +94,8 @@
         quota = get_quota(context, project_id)
         allowed_floating_ips = quota['floating_ips'] - used_floating_ips
         return min(num_floating_ips, allowed_floating_ips)
+
+
+class QuotaError(exception.ApiError):
+    """Quota Exceeeded"""
+    pass
=== modified file 'nova/tests/api/openstack/fakes.py'
--- nova/tests/api/openstack/fakes.py 2010-11-30 18:52:46 +0000
+++ nova/tests/api/openstack/fakes.py 2010-12-02 17:38:10 +0000
@@ -72,7 +72,7 @@
 
 
 def stub_out_image_service(stubs):
-    def fake_image_show(meh, id):
+    def fake_image_show(meh, context, id):
         return dict(kernelId=1, ramdiskId=1)
 
     stubs.Set(nova.image.local.LocalImageService, 'show', fake_image_show)
=== modified file 'nova/tests/api/openstack/test_servers.py'
--- nova/tests/api/openstack/test_servers.py 2010-12-01 20:18:24 +0000
+++ nova/tests/api/openstack/test_servers.py 2010-12-02 17:38:10 +0000
@@ -43,6 +43,10 @@
         return [stub_instance(i, user_id) for i in xrange(5)]
 
 
+def return_security_group(context, instance_id, security_group_id):
+    pass
+
+
 def stub_instance(id, user_id=1):
     return Instance(id=id, state=0, image_id=10, display_name='server%s' % id,
                     user_id=user_id)
@@ -63,6 +67,8 @@
                        return_server)
         self.stubs.Set(nova.db.api, 'instance_get_all_by_user',
                        return_servers)
+        self.stubs.Set(nova.db.api, 'instance_add_security_group',
+                       return_security_group)
 
     def tearDown(self):
         self.stubs.UnsetAll()
=== modified file 'nova/tests/compute_unittest.py'
--- nova/tests/compute_unittest.py 2010-12-02 16:08:56 +0000
+++ nova/tests/compute_unittest.py 2010-12-02 17:38:10 +0000
@@ -31,6 +31,7 @@
 from nova import test
 from nova import utils
 from nova.auth import manager
+from nova.compute import api as compute_api
 
 FLAGS = flags.FLAGS
 
@@ -43,6 +44,7 @@
         self.flags(connection_type='fake',
                    network_manager='nova.network.manager.FlatManager')
         self.compute = utils.import_object(FLAGS.compute_manager)
+        self.compute_api = compute_api.ComputeAPI()
         self.manager = manager.AuthManager()
         self.user = self.manager.create_user('fake', 'fake', 'fake')
         self.project = self.manager.create_project('fake', 'fake', 'fake')
@@ -70,7 +72,8 @@
         """Verify that an instance cannot be created without a display_name."""
         cases = [dict(), dict(display_name=None)]
         for instance in cases:
-            ref = self.compute.create_instance(self.context, None, **instance)
+            ref = self.compute_api.create_instance(self.context, None,
+                                                   **instance)
             try:
                 self.assertNotEqual(ref.display_name, None)
             finally:
@@ -86,9 +89,9 @@
                   'user_id': self.user.id,
                   'project_id': self.project.id}
         group = db.security_group_create(self.context, values)
-        ref = self.compute.create_instance(self.context,
-                                           security_groups=[group['id']],
-                                           **inst)
+        ref = self.compute_api.create_instance(self.context,
+                                               security_groups=[group['id']],
+                                               **inst)
         # reload to get groups
         instance_ref = db.instance_get(self.context, ref['id'])
         try:
=== modified file 'nova/tests/quota_unittest.py'
--- nova/tests/quota_unittest.py 2010-11-17 21:23:12 +0000
+++ nova/tests/quota_unittest.py 2010-12-02 17:38:10 +0000
@@ -94,11 +94,12 @@
         for i in range(FLAGS.quota_instances):
             instance_id = self._create_instance()
             instance_ids.append(instance_id)
-        self.assertRaises(cloud.QuotaError, self.cloud.run_instances,
+        self.assertRaises(quota.QuotaError, self.cloud.run_instances,
                           self.context,
                           min_count=1,
                           max_count=1,
-                          instance_type='m1.small')
+                          instance_type='m1.small',
+                          image_id='fake')
         for instance_id in instance_ids:
             db.instance_destroy(self.context, instance_id)
@@ -106,11 +107,12 @@
         instance_ids = []
         instance_id = self._create_instance(cores=4)
         instance_ids.append(instance_id)
-        self.assertRaises(cloud.QuotaError, self.cloud.run_instances,
+        self.assertRaises(quota.QuotaError, self.cloud.run_instances,
                           self.context,
                           min_count=1,
                           max_count=1,
-                          instance_type='m1.small')
+                          instance_type='m1.small',
+                          image_id='fake')
         for instance_id in instance_ids:
             db.instance_destroy(self.context, instance_id)
@@ -119,7 +121,7 @@
         for i in range(FLAGS.quota_volumes):
             volume_id = self._create_volume()
             volume_ids.append(volume_id)
-        self.assertRaises(cloud.QuotaError, self.cloud.create_volume,
+        self.assertRaises(quota.QuotaError, self.cloud.create_volume,
                           self.context,
                           size=10)
         for volume_id in volume_ids:
@@ -129,7 +131,7 @@
         volume_ids = []
         volume_id = self._create_volume(size=20)
         volume_ids.append(volume_id)
-        self.assertRaises(cloud.QuotaError,
+        self.assertRaises(quota.QuotaError,
                           self.cloud.create_volume,
                           self.context,
                           size=10)
@@ -146,6 +148,6 @@
         # make an rpc.call, the test just finishes with OK. It
         # appears to be something in the magic inline callbacks
         # that is breaking.
-        self.assertRaises(cloud.QuotaError, self.cloud.allocate_address,
+        self.assertRaises(quota.QuotaError, self.cloud.allocate_address,
                           self.context)
         db.floating_ip_destroy(context.get_admin_context(), address)