Merge lp:~eday/nova/compute-abstraction into lp:~hudson-openstack/nova/trunk
- compute-abstraction
- Merge into trunk
Status: | Merged |
---|---|
Approved by: | Vish Ishaya |
Approved revision: | 427 |
Merged at revision: | 437 |
Proposed branch: | lp:~eday/nova/compute-abstraction |
Merge into: | lp:~hudson-openstack/nova/trunk |
Diff against target: |
953 lines (+347/-301) 12 files modified
nova/api/ec2/cloud.py (+31/-148) nova/api/openstack/servers.py (+14/-84) nova/compute/api.py (+212/-0) nova/compute/instance_types.py (+20/-0) nova/compute/manager.py (+1/-48) nova/db/base.py (+36/-0) nova/manager.py (+5/-9) nova/quota.py (+5/-0) nova/tests/api/openstack/fakes.py (+1/-1) nova/tests/api/openstack/test_servers.py (+6/-0) nova/tests/compute_unittest.py (+7/-4) nova/tests/quota_unittest.py (+9/-7) |
To merge this branch: | bzr merge lp:~eday/nova/compute-abstraction |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Michael Gundlach (community) | Approve | ||
Vish Ishaya (community) | Approve | ||
Soren Hansen (community) | Approve | ||
Review via email: mp+41805@code.launchpad.net |
Commit message
Description of the change
Consolidated the start-instance logic in the two API classes into a single method. This also cleans up a number of small discrepancies between the two.
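A rough sketch of the consolidation this branch performs: each frontend keeps only the translation of its own argument conventions and delegates the shared creation logic to one method. All names below are simplified stand-ins for illustration, not the actual signatures in the branch.

```python
class ComputeAPI:
    """Single place that owns the instance-creation logic."""

    def create_instances(self, instance_type, image_id, count=1, name='',
                         generate_hostname=str):
        instances = []
        for num in range(count):
            internal_id = num + 1
            instances.append({'internal_id': internal_id,
                              'instance_type': instance_type,
                              'image_id': image_id,
                              'launch_index': num,
                              'name': name,
                              'hostname': generate_hostname(internal_id)})
        return instances


class EC2Frontend:
    """EC2-style entry point: loose kwargs in, EC2-style hostnames out."""

    def __init__(self):
        self.compute_api = ComputeAPI()

    def run_instances(self, **kwargs):
        return self.compute_api.create_instances(
            kwargs.get('instance_type', 'm1.small'),
            kwargs['image_id'],
            count=int(kwargs.get('max_count', 1)),
            name=kwargs.get('display_name', ''),
            generate_hostname=lambda i: 'i-%08x' % i)


class OpenStackFrontend:
    """OpenStack-style entry point: a server dict in, default hostnames out."""

    def __init__(self):
        self.compute_api = ComputeAPI()

    def create(self, server):
        return self.compute_api.create_instances(
            server['flavorId'], server['imageId'], name=server['name'])
```

With this shape, a behavioral discrepancy between the two APIs can only live in the thin translation layer, not in two diverging copies of the creation logic.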
Soren Hansen (soren) wrote : | # |
2010/11/25 Vish Ishaya <email address hidden>:
> Review: Approve
> Questions on irc were addressed by 424. LGTM. We definitely need to clean up cloud unittests.
It would probably be useful to post the relevant IRC conversation
here. I'd like to know what you talked about, but I'm not really
interested in digging through hours worth of irc logs. :)
--
Soren Hansen
Ubuntu Developer http://
OpenStack Developer http://
Soren Hansen (soren) wrote : | # |
2010/11/24 Eric Day <email address hidden>:
> === modified file 'nova/api/ec2/cloud.py'
> --- nova/api/ec2/cloud.py
> +++ nova/api/ec2/cloud.py
> @@ -39,7 +39,7 @@
> from nova import quota
> from nova import rpc
> from nova import utils
> -from nova.compute.instance_types import INSTANCE_TYPES
> +from nova.compute import instance_types
This needs to move out of code anyway. Filed bug #681411.
> @@ -260,7 +255,7 @@
> return True
>
> def describe_security_groups(self, context, group_name=None, **kwargs):
> - self._ensure_default_security_group(context)
> + self.compute_api.ensure_default_security_group(context)
> if context.user.is_admin():
> groups = db.security_group_get_all(context)
> else:
I understand the motivation to consolidate this code in the compute
manager. I just think that instances launched through the EC2 API should
land in one security group by default and instances launched through the
OpenStack API should land in another by default. EC2 restricts all
access to instances by default, while Rackspace has traditionally left
them open, leaving it to the owner of the instance to shield it off.
I filed bug #681416 to track this.
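The behaviour Soren describes (closed by default for EC2, open by default for the OpenStack API) could look roughly like this sketch. The group names, the `allow_all` field, and the dict-based store are all assumptions for illustration, not anything from the branch or bug #681416.

```python
# Hypothetical per-API defaults: EC2 restricts all access by default,
# the Rackspace/OpenStack tradition leaves instances open.
DEFAULTS = {
    'ec2': {'name': 'default', 'allow_all': False},
    'openstack': {'name': 'default-open', 'allow_all': True},
}


def ensure_default_security_group(groups, api_style):
    """Create the per-API default group in `groups` if it is missing."""
    default = DEFAULTS[api_style]
    if default['name'] not in groups:
        groups[default['name']] = dict(default)
    return groups[default['name']]
```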
> @@ -505,9 +500,8 @@
> if quota.allowed_volumes(context, 1, size) < 1:
> logging.warn("Quota exceeeded for %s, tried to create %sG volume",
> context.project_id, size)
> - raise QuotaError("Volume quota exceeded. You cannot "
> - "create a volume of size %s" %
> - size)
> + raise quota.QuotaError("Volume quota exceeded. You cannot "
> + "create a volume of size %s" % size)
We should include a unit, not just the number. Perhaps the user thinks
he's creating a 1000 MB volume, but we're actually blocking him from
creating a 1000 GB volume.
Filed bug #681417.
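Soren's point is simply that the message should carry the unit alongside the number. A minimal sketch of the fix; this `QuotaError` and helper are stand-ins for illustration, not nova's actual code:

```python
class QuotaError(Exception):
    """Stand-in for nova's quota error class."""


def check_volume_quota(requested_gb, allowed_gb):
    """Reject the request with an unambiguous, unit-bearing message."""
    if requested_gb > allowed_gb:
        # "1000G" instead of a bare "1000", so the user cannot misread
        # the size as megabytes.
        raise QuotaError("Volume quota exceeded. You cannot create a "
                         "volume of size %dG" % requested_gb)
```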
> === modified file 'nova/compute/manager.py'
> --- nova/compute/manager.py
> +++ nova/compute/manager.py
> @@ -36,13 +36,18 @@
>
> import logging
> +import time
>
> from twisted.internet import defer
>
> +from nova import db
> from nova import exception
> from nova import flags
> from nova import manager
> +from nova import quota
> +from nova import rpc
> from nova import utils
> +from nova.compute import instance_types
> from nova.compute import power_state
>
>
> @@ -53,6 +58,11 @@
> 'Driver to use for volume creation')
>
>
> +def generate_default_hostname(internal_id):
> + """Default function to generate a hostname given an instance reference."""
> + return str(internal_id)
> +
> +
> class ComputeManager(
> """Manages the running instances from creation to destruction."""
>
> @@ -84,6 +94,126 @@
> """This call passes stright through to the virtualization driver."""
> yield self.driver.
>
> + # TODO(eday): network_topic arg should go away once we push network
> # allocation into the scheduler or compute worker.
Eric Day (eday) wrote : | # |
I agree on all the points regarding security groups and fixing other things you've filed bugs for. I was trying to just shuffle code around so we only need to edit things in one place, and not actually change the logic.
As far as putting the code in compute manager, I started by creating nova.compute.api, but this felt weird too. It actually made sense to me in the end to put them right next to each other, just knowing they will be on opposite ends of the worker. Perhaps we could just split the class and have both in the manager.py file. I'm open.
Soren Hansen (soren) wrote : | # |
Separate classes in the same file sounds good. It also lets us do things in the individual __init__ methods without worrying that they might get run in the wrong context.
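The appeal of separate classes is exactly that each `__init__` only ever runs in its intended context. A toy sketch of the idea, with illustrative attribute names (the real classes would set up a db handle and a virt driver, respectively):

```python
class ComputeAPI:
    """API-host side: safe to construct wherever a request is handled."""

    def __init__(self):
        # Setup that belongs only to the API side, e.g. a db handle.
        self.db = {'instances': {}}


class ComputeManager:
    """Worker side: constructed only on the compute host."""

    def __init__(self):
        # Setup that belongs only to the worker, e.g. the virt driver;
        # none of this runs when an API host instantiates ComputeAPI.
        self.driver = 'libvirt'  # placeholder for the real driver object
```

Even sitting side by side in manager.py, neither constructor can accidentally drag the other's dependencies into the wrong process.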
The bugs I filed and mentioned weren't meant as comments on your code. It was just stuff I stumbled upon while reading your diff. I guess I should have left it out to avoid the confusion. Just ignore it. :)
Vish Ishaya (vishvananda) wrote : | # |
The stuff in irc was simply noticing that exception handling on image_get wasn't handled consistently. Giving the workers an api seems reasonable to me. I think it is easier conceptually than having the same class work both locally and remote.
How do you see this working for the volume? Code from volume_manager runs on the api host (create the db record), the compute host (discover the volume), and the volume host (create and export the volume). Do we have three separate classes? Or would VolumeAPI encompass all of the functions that are called by other workers (the api host and compute host code)?
Regardless, we need to move the other workers' code into this format ASAP.
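One way the volume case could shake out, as a hedged sketch: a single `VolumeAPI` holds every operation invoked *from* other hosts (the api host creating the db record, the compute host discovering the volume), while the export step stays in the volume worker's manager. All names and the dict-backed store below are hypothetical, not the answer the thread settled on.

```python
class VolumeAPI:
    """Caller-side operations, runnable from api or compute hosts."""

    def __init__(self):
        self.db = {}  # stands in for the real datastore

    def create_volume(self, volume_id, size):
        # Runs on the api host: create the db record.
        self.db[volume_id] = {'size': size, 'status': 'creating'}
        return self.db[volume_id]

    def discover_volume(self, volume_id):
        # Runs on the compute host: look up the volume to attach.
        return self.db[volume_id]


class VolumeManager:
    """Worker-side logic: only ever runs on the volume host."""

    def export_volume(self, api, volume_id):
        # Create and export the actual volume, then mark it usable.
        api.db[volume_id]['status'] = 'available'
```

This keeps it at two classes rather than three: the split follows "called by others" vs. "runs on the worker", not one class per host.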
Soren Hansen (soren) wrote : | # |
This is great stuff! Thanks!
OpenStack Infra (hudson-openstack) wrote : | # |
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/api/
text conflict in nova/compute/
OpenStack Infra (hudson-openstack) wrote : | # |
There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.
OpenStack Infra (hudson-openstack) wrote : | # |
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/compute/
Michael Gundlach (gundlach) wrote : | # |
465:s/quote/quota/ and shouldn't that be in multiline triple-quote form?
763:isn't it a pep8 violation to not have blank lines b/w class and comment? also, i'm not sure i like having an extra base class just to set 'self.db', versus making a helper function in the db module that can be called explicitly to set self.db. but that's bikeshedding so i'm ok with how it is now too.
lgtm.
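The two alternatives Michael weighs can be shown side by side; both leave the object holding a `self.db` handle, and the difference is only whether the wiring lives in a shared base class or in an explicit helper. Names here are illustrative (`FakeDriver` stands in for the real db driver object):

```python
class FakeDriver:
    """Placeholder for the imported db driver."""
    name = 'fake'


# Alternative 1: a small base class, as in the branch's nova/db/base.py.
class Base:
    def __init__(self):
        self.db = FakeDriver()


class ComputeAPIWithBase(Base):
    pass


# Alternative 2: an explicit helper in the db module, called from __init__.
def get_db_driver():
    return FakeDriver()


class ComputeAPIWithHelper:
    def __init__(self):
        self.db = get_db_driver()
```

As Michael says, this is largely bikeshedding: the base class saves a line per subclass, the helper keeps the wiring visible at each call site.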
Preview Diff
1 | === modified file 'nova/api/ec2/cloud.py' |
2 | --- nova/api/ec2/cloud.py 2010-11-30 08:19:32 +0000 |
3 | +++ nova/api/ec2/cloud.py 2010-12-02 17:38:10 +0000 |
4 | @@ -39,7 +39,8 @@ |
5 | from nova import quota |
6 | from nova import rpc |
7 | from nova import utils |
8 | -from nova.compute.instance_types import INSTANCE_TYPES |
9 | +from nova.compute import api as compute_api |
10 | +from nova.compute import instance_types |
11 | from nova.api import cloud |
12 | from nova.image.s3 import S3ImageService |
13 | |
14 | @@ -50,11 +51,6 @@ |
15 | InvalidInputException = exception.InvalidInputException |
16 | |
17 | |
18 | -class QuotaError(exception.ApiError): |
19 | - """Quota Exceeeded""" |
20 | - pass |
21 | - |
22 | - |
23 | def _gen_key(context, user_id, key_name): |
24 | """Generate a key |
25 | |
26 | @@ -99,7 +95,7 @@ |
27 | """ |
28 | def __init__(self): |
29 | self.network_manager = utils.import_object(FLAGS.network_manager) |
30 | - self.compute_manager = utils.import_object(FLAGS.compute_manager) |
31 | + self.compute_api = compute_api.ComputeAPI() |
32 | self.image_service = S3ImageService() |
33 | self.setup() |
34 | |
35 | @@ -127,7 +123,7 @@ |
36 | for instance in db.instance_get_all_by_project(context, project_id): |
37 | if instance['fixed_ip']: |
38 | line = '%s slots=%d' % (instance['fixed_ip']['address'], |
39 | - INSTANCE_TYPES[instance['instance_type']]['vcpus']) |
40 | + instance['vcpus']) |
41 | key = str(instance['key_name']) |
42 | if key in result: |
43 | result[key].append(line) |
44 | @@ -260,7 +256,7 @@ |
45 | return True |
46 | |
47 | def describe_security_groups(self, context, group_name=None, **kwargs): |
48 | - self._ensure_default_security_group(context) |
49 | + self.compute_api.ensure_default_security_group(context) |
50 | if context.user.is_admin(): |
51 | groups = db.security_group_get_all(context) |
52 | else: |
53 | @@ -358,7 +354,7 @@ |
54 | return False |
55 | |
56 | def revoke_security_group_ingress(self, context, group_name, **kwargs): |
57 | - self._ensure_default_security_group(context) |
58 | + self.compute_api.ensure_default_security_group(context) |
59 | security_group = db.security_group_get_by_name(context, |
60 | context.project_id, |
61 | group_name) |
62 | @@ -383,7 +379,7 @@ |
63 | # for these operations, so support for newer API versions |
64 | # is sketchy. |
65 | def authorize_security_group_ingress(self, context, group_name, **kwargs): |
66 | - self._ensure_default_security_group(context) |
67 | + self.compute_api.ensure_default_security_group(context) |
68 | security_group = db.security_group_get_by_name(context, |
69 | context.project_id, |
70 | group_name) |
71 | @@ -419,7 +415,7 @@ |
72 | return source_project_id |
73 | |
74 | def create_security_group(self, context, group_name, group_description): |
75 | - self._ensure_default_security_group(context) |
76 | + self.compute_api.ensure_default_security_group(context) |
77 | if db.security_group_exists(context, context.project_id, group_name): |
78 | raise exception.ApiError('group %s already exists' % group_name) |
79 | |
80 | @@ -505,9 +501,8 @@ |
81 | if quota.allowed_volumes(context, 1, size) < 1: |
82 | logging.warn("Quota exceeeded for %s, tried to create %sG volume", |
83 | context.project_id, size) |
84 | - raise QuotaError("Volume quota exceeded. You cannot " |
85 | - "create a volume of size %s" % |
86 | - size) |
87 | + raise quota.QuotaError("Volume quota exceeded. You cannot " |
88 | + "create a volume of size %s" % size) |
89 | vol = {} |
90 | vol['size'] = size |
91 | vol['user_id'] = context.user.id |
92 | @@ -699,8 +694,8 @@ |
93 | if quota.allowed_floating_ips(context, 1) < 1: |
94 | logging.warn("Quota exceeeded for %s, tried to allocate address", |
95 | context.project_id) |
96 | - raise QuotaError("Address quota exceeded. You cannot " |
97 | - "allocate any more addresses") |
98 | + raise quota.QuotaError("Address quota exceeded. You cannot " |
99 | + "allocate any more addresses") |
100 | network_topic = self._get_network_topic(context) |
101 | public_ip = rpc.call(context, |
102 | network_topic, |
103 | @@ -752,137 +747,25 @@ |
104 | "args": {"network_id": network_ref['id']}}) |
105 | return db.queue_get_for(context, FLAGS.network_topic, host) |
106 | |
107 | - def _ensure_default_security_group(self, context): |
108 | - try: |
109 | - db.security_group_get_by_name(context, |
110 | - context.project_id, |
111 | - 'default') |
112 | - except exception.NotFound: |
113 | - values = {'name': 'default', |
114 | - 'description': 'default', |
115 | - 'user_id': context.user.id, |
116 | - 'project_id': context.project_id} |
117 | - group = db.security_group_create(context, values) |
118 | - |
119 | def run_instances(self, context, **kwargs): |
120 | - instance_type = kwargs.get('instance_type', 'm1.small') |
121 | - if instance_type not in INSTANCE_TYPES: |
122 | - raise exception.ApiError("Unknown instance type: %s", |
123 | - instance_type) |
124 | - # check quota |
125 | - max_instances = int(kwargs.get('max_count', 1)) |
126 | - min_instances = int(kwargs.get('min_count', max_instances)) |
127 | - num_instances = quota.allowed_instances(context, |
128 | - max_instances, |
129 | - instance_type) |
130 | - if num_instances < min_instances: |
131 | - logging.warn("Quota exceeeded for %s, tried to run %s instances", |
132 | - context.project_id, min_instances) |
133 | - raise QuotaError("Instance quota exceeded. You can only " |
134 | - "run %s more instances of this type." % |
135 | - num_instances, "InstanceLimitExceeded") |
136 | - # make sure user can access the image |
137 | - # vpn image is private so it doesn't show up on lists |
138 | - vpn = kwargs['image_id'] == FLAGS.vpn_image_id |
139 | - |
140 | - if not vpn: |
141 | - image = self.image_service.show(context, kwargs['image_id']) |
142 | - |
143 | - # FIXME(ja): if image is vpn, this breaks |
144 | - # get defaults from imagestore |
145 | - image_id = image['imageId'] |
146 | - kernel_id = image.get('kernelId', FLAGS.default_kernel) |
147 | - ramdisk_id = image.get('ramdiskId', FLAGS.default_ramdisk) |
148 | - |
149 | - # API parameters overrides of defaults |
150 | - kernel_id = kwargs.get('kernel_id', kernel_id) |
151 | - ramdisk_id = kwargs.get('ramdisk_id', ramdisk_id) |
152 | - |
153 | - # make sure we have access to kernel and ramdisk |
154 | - self.image_service.show(context, kernel_id) |
155 | - self.image_service.show(context, ramdisk_id) |
156 | - |
157 | - logging.debug("Going to run %s instances...", num_instances) |
158 | - launch_time = time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()) |
159 | - key_data = None |
160 | - if 'key_name' in kwargs: |
161 | - key_pair_ref = db.key_pair_get(context, |
162 | - context.user.id, |
163 | - kwargs['key_name']) |
164 | - key_data = key_pair_ref['public_key'] |
165 | - |
166 | - security_group_arg = kwargs.get('security_group', ["default"]) |
167 | - if not type(security_group_arg) is list: |
168 | - security_group_arg = [security_group_arg] |
169 | - |
170 | - security_groups = [] |
171 | - self._ensure_default_security_group(context) |
172 | - for security_group_name in security_group_arg: |
173 | - group = db.security_group_get_by_name(context, |
174 | - context.project_id, |
175 | - security_group_name) |
176 | - security_groups.append(group['id']) |
177 | - |
178 | - reservation_id = utils.generate_uid('r') |
179 | - base_options = {} |
180 | - base_options['state_description'] = 'scheduling' |
181 | - base_options['image_id'] = image_id |
182 | - base_options['kernel_id'] = kernel_id |
183 | - base_options['ramdisk_id'] = ramdisk_id |
184 | - base_options['reservation_id'] = reservation_id |
185 | - base_options['key_data'] = key_data |
186 | - base_options['key_name'] = kwargs.get('key_name', None) |
187 | - base_options['user_id'] = context.user.id |
188 | - base_options['project_id'] = context.project_id |
189 | - base_options['user_data'] = kwargs.get('user_data', '') |
190 | - |
191 | - base_options['display_name'] = kwargs.get('display_name') |
192 | - base_options['display_description'] = kwargs.get('display_description') |
193 | - |
194 | - type_data = INSTANCE_TYPES[instance_type] |
195 | - base_options['instance_type'] = instance_type |
196 | - base_options['memory_mb'] = type_data['memory_mb'] |
197 | - base_options['vcpus'] = type_data['vcpus'] |
198 | - base_options['local_gb'] = type_data['local_gb'] |
199 | - elevated = context.elevated() |
200 | - |
201 | - for num in range(num_instances): |
202 | - |
203 | - instance_ref = self.compute_manager.create_instance(context, |
204 | - security_groups, |
205 | - mac_address=utils.generate_mac(), |
206 | - launch_index=num, |
207 | - **base_options) |
208 | - inst_id = instance_ref['id'] |
209 | - |
210 | - internal_id = instance_ref['internal_id'] |
211 | - ec2_id = internal_id_to_ec2_id(internal_id) |
212 | - |
213 | - self.compute_manager.update_instance(context, |
214 | - inst_id, |
215 | - hostname=ec2_id) |
216 | - |
217 | - # TODO(vish): This probably should be done in the scheduler |
218 | - # or in compute as a call. The network should be |
219 | - # allocated after the host is assigned and setup |
220 | - # can happen at the same time. |
221 | - address = self.network_manager.allocate_fixed_ip(context, |
222 | - inst_id, |
223 | - vpn) |
224 | - network_topic = self._get_network_topic(context) |
225 | - rpc.cast(elevated, |
226 | - network_topic, |
227 | - {"method": "setup_fixed_ip", |
228 | - "args": {"address": address}}) |
229 | - |
230 | - rpc.cast(context, |
231 | - FLAGS.scheduler_topic, |
232 | - {"method": "run_instance", |
233 | - "args": {"topic": FLAGS.compute_topic, |
234 | - "instance_id": inst_id}}) |
235 | - logging.debug("Casting to scheduler for %s/%s's instance %s" % |
236 | - (context.project.name, context.user.name, inst_id)) |
237 | - return self._format_run_instances(context, reservation_id) |
238 | + max_count = int(kwargs.get('max_count', 1)) |
239 | + instances = self.compute_api.create_instances(context, |
240 | + instance_types.get_by_type(kwargs.get('instance_type', None)), |
241 | + self.image_service, |
242 | + kwargs['image_id'], |
243 | + self._get_network_topic(context), |
244 | + min_count=int(kwargs.get('min_count', max_count)), |
245 | + max_count=max_count, |
246 | + kernel_id=kwargs.get('kernel_id'), |
247 | + ramdisk_id=kwargs.get('ramdisk_id'), |
248 | + name=kwargs.get('display_name'), |
249 | + description=kwargs.get('display_description'), |
250 | + user_data=kwargs.get('user_data', ''), |
251 | + key_name=kwargs.get('key_name'), |
252 | + security_group=kwargs.get('security_group'), |
253 | + generate_hostname=internal_id_to_ec2_id) |
254 | + return self._format_run_instances(context, |
255 | + instances[0]['reservation_id']) |
256 | |
257 | def terminate_instances(self, context, instance_id, **kwargs): |
258 | """Terminate each instance in instance_id, which is a list of ec2 ids. |
259 | @@ -907,7 +790,7 @@ |
260 | id_str) |
261 | continue |
262 | now = datetime.datetime.utcnow() |
263 | - self.compute_manager.update_instance(context, |
264 | + self.compute_api.update_instance(context, |
265 | instance_ref['id'], |
266 | state_description='terminating', |
267 | state=0, |
268 | |
269 | === modified file 'nova/api/openstack/servers.py' |
270 | --- nova/api/openstack/servers.py 2010-12-01 20:18:24 +0000 |
271 | +++ nova/api/openstack/servers.py 2010-12-02 17:38:10 +0000 |
272 | @@ -27,6 +27,7 @@ |
273 | from nova import context |
274 | from nova.api import cloud |
275 | from nova.api.openstack import faults |
276 | +from nova.compute import api as compute_api |
277 | from nova.compute import instance_types |
278 | from nova.compute import power_state |
279 | import nova.api.openstack |
280 | @@ -95,7 +96,7 @@ |
281 | db_driver = FLAGS.db_driver |
282 | self.db_driver = utils.import_object(db_driver) |
283 | self.network_manager = utils.import_object(FLAGS.network_manager) |
284 | - self.compute_manager = utils.import_object(FLAGS.compute_manager) |
285 | + self.compute_api = compute_api.ComputeAPI() |
286 | super(Controller, self).__init__() |
287 | |
288 | def index(self, req): |
289 | @@ -140,22 +141,23 @@ |
290 | |
291 | def create(self, req): |
292 | """ Creates a new server for a given user """ |
293 | - |
294 | env = self._deserialize(req.body, req) |
295 | if not env: |
296 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
297 | |
298 | - #try: |
299 | - inst = self._build_server_instance(req, env) |
300 | - #except Exception, e: |
301 | - # return faults.Fault(exc.HTTPUnprocessableEntity()) |
302 | - |
303 | user_id = req.environ['nova.context']['user']['id'] |
304 | - rpc.cast(context.RequestContext(user_id, user_id), |
305 | - FLAGS.compute_topic, |
306 | - {"method": "run_instance", |
307 | - "args": {"instance_id": inst['id']}}) |
308 | - return _entity_inst(inst) |
309 | + ctxt = context.RequestContext(user_id, user_id) |
310 | + key_pair = self.db_driver.key_pair_get_all_by_user(None, user_id)[0] |
311 | + instances = self.compute_api.create_instances(ctxt, |
312 | + instance_types.get_by_flavor_id(env['server']['flavorId']), |
313 | + utils.import_object(FLAGS.image_service), |
314 | + env['server']['imageId'], |
315 | + self._get_network_topic(ctxt), |
316 | + name=env['server']['name'], |
317 | + description=env['server']['name'], |
318 | + key_name=key_pair['name'], |
319 | + key_data=key_pair['public_key']) |
320 | + return _entity_inst(instances[0]) |
321 | |
322 | def update(self, req, id): |
323 | """ Updates the server name or password """ |
324 | @@ -191,78 +193,6 @@ |
325 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
326 | cloud.reboot(id) |
327 | |
328 | - def _build_server_instance(self, req, env): |
329 | - """Build instance data structure and save it to the data store.""" |
330 | - ltime = time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()) |
331 | - inst = {} |
332 | - |
333 | - user_id = req.environ['nova.context']['user']['id'] |
334 | - ctxt = context.RequestContext(user_id, user_id) |
335 | - |
336 | - flavor_id = env['server']['flavorId'] |
337 | - |
338 | - instance_type, flavor = [(k, v) for k, v in |
339 | - instance_types.INSTANCE_TYPES.iteritems() |
340 | - if v['flavorid'] == flavor_id][0] |
341 | - |
342 | - image_id = env['server']['imageId'] |
343 | - img_service = utils.import_object(FLAGS.image_service) |
344 | - |
345 | - image = img_service.show(image_id) |
346 | - |
347 | - if not image: |
348 | - raise Exception("Image not found") |
349 | - |
350 | - inst['image_id'] = image_id |
351 | - inst['user_id'] = user_id |
352 | - inst['launch_time'] = ltime |
353 | - inst['mac_address'] = utils.generate_mac() |
354 | - inst['project_id'] = user_id |
355 | - |
356 | - inst['state_description'] = 'scheduling' |
357 | - inst['kernel_id'] = image.get('kernelId', FLAGS.default_kernel) |
358 | - inst['ramdisk_id'] = image.get('ramdiskId', FLAGS.default_ramdisk) |
359 | - inst['reservation_id'] = utils.generate_uid('r') |
360 | - |
361 | - inst['display_name'] = env['server']['name'] |
362 | - inst['display_description'] = env['server']['name'] |
363 | - |
364 | - #TODO(dietz) this may be ill advised |
365 | - key_pair_ref = self.db_driver.key_pair_get_all_by_user( |
366 | - None, user_id)[0] |
367 | - |
368 | - inst['key_data'] = key_pair_ref['public_key'] |
369 | - inst['key_name'] = key_pair_ref['name'] |
370 | - |
371 | - #TODO(dietz) stolen from ec2 api, see TODO there |
372 | - inst['security_group'] = 'default' |
373 | - |
374 | - # Flavor related attributes |
375 | - inst['instance_type'] = instance_type |
376 | - inst['memory_mb'] = flavor['memory_mb'] |
377 | - inst['vcpus'] = flavor['vcpus'] |
378 | - inst['local_gb'] = flavor['local_gb'] |
379 | - inst['mac_address'] = utils.generate_mac() |
380 | - inst['launch_index'] = 0 |
381 | - |
382 | - ref = self.compute_manager.create_instance(ctxt, **inst) |
383 | - inst['id'] = ref['internal_id'] |
384 | - |
385 | - inst['hostname'] = str(ref['internal_id']) |
386 | - self.compute_manager.update_instance(ctxt, inst['id'], **inst) |
387 | - |
388 | - address = self.network_manager.allocate_fixed_ip(ctxt, |
389 | - inst['id']) |
390 | - |
391 | - # TODO(vish): This probably should be done in the scheduler |
392 | - # network is setup when host is assigned |
393 | - network_topic = self._get_network_topic(ctxt) |
394 | - rpc.call(ctxt, |
395 | - network_topic, |
396 | - {"method": "setup_fixed_ip", |
397 | - "args": {"address": address}}) |
398 | - return inst |
399 | - |
400 | def _get_network_topic(self, context): |
401 | """Retrieves the network host for a project""" |
402 | network_ref = self.network_manager.get_network(context) |
403 | |
404 | === added file 'nova/compute/api.py' |
405 | --- nova/compute/api.py 1970-01-01 00:00:00 +0000 |
406 | +++ nova/compute/api.py 2010-12-02 17:38:10 +0000 |
407 | @@ -0,0 +1,212 @@ |
408 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
409 | + |
410 | +# Copyright 2010 United States Government as represented by the |
411 | +# Administrator of the National Aeronautics and Space Administration. |
412 | +# All Rights Reserved. |
413 | +# |
414 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
415 | +# not use this file except in compliance with the License. You may obtain |
416 | +# a copy of the License at |
417 | +# |
418 | +# http://www.apache.org/licenses/LICENSE-2.0 |
419 | +# |
420 | +# Unless required by applicable law or agreed to in writing, software |
421 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
422 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
423 | +# License for the specific language governing permissions and limitations |
424 | +# under the License. |
425 | + |
426 | +""" |
427 | +Handles all API requests relating to instances (guest vms). |
428 | +""" |
429 | + |
430 | +import logging |
431 | +import time |
432 | + |
433 | +from nova import db |
434 | +from nova import exception |
435 | +from nova import flags |
436 | +from nova import quota |
437 | +from nova import rpc |
438 | +from nova import utils |
439 | +from nova.compute import instance_types |
440 | +from nova.db import base |
441 | + |
442 | +FLAGS = flags.FLAGS |
443 | + |
444 | + |
445 | +def generate_default_hostname(internal_id): |
446 | + """Default function to generate a hostname given an instance reference.""" |
447 | + return str(internal_id) |
448 | + |
449 | + |
450 | +class ComputeAPI(base.Base): |
451 | + """API for interacting with the compute manager.""" |
452 | + |
453 | + def __init__(self, **kwargs): |
454 | + self.network_manager = utils.import_object(FLAGS.network_manager) |
455 | + super(ComputeAPI, self).__init__(**kwargs) |
456 | + |
457 | + # TODO(eday): network_topic arg should go away once we push network |
458 | + # allocation into the scheduler or compute worker. |
459 | + def create_instances(self, context, instance_type, image_service, image_id, |
460 | + network_topic, min_count=1, max_count=1, |
461 | + kernel_id=None, ramdisk_id=None, name='', |
462 | + description='', user_data='', key_name=None, |
463 | + key_data=None, security_group='default', |
464 | + generate_hostname=generate_default_hostname): |
465 | + """Create the number of instances requested if quote and |
466 | + other arguments check out ok.""" |
467 | + |
468 | + num_instances = quota.allowed_instances(context, max_count, |
469 | + instance_type) |
470 | + if num_instances < min_count: |
471 | + logging.warn("Quota exceeeded for %s, tried to run %s instances", |
472 | + context.project_id, min_count) |
473 | + raise quota.QuotaError("Instance quota exceeded. You can only " |
474 | + "run %s more instances of this type." % |
475 | + num_instances, "InstanceLimitExceeded") |
476 | + |
477 | + is_vpn = image_id == FLAGS.vpn_image_id |
478 | + if not is_vpn: |
479 | + image = image_service.show(context, image_id) |
480 | + if kernel_id is None: |
481 | + kernel_id = image.get('kernelId', FLAGS.default_kernel) |
482 | + if ramdisk_id is None: |
483 | + ramdisk_id = image.get('ramdiskId', FLAGS.default_ramdisk) |
484 | + |
485 | + # Make sure we have access to kernel and ramdisk |
486 | + image_service.show(context, kernel_id) |
487 | + image_service.show(context, ramdisk_id) |
488 | + |
489 | + if security_group is None: |
490 | + security_group = ['default'] |
491 | + if not type(security_group) is list: |
492 | + security_group = [security_group] |
493 | + |
494 | + security_groups = [] |
495 | + self.ensure_default_security_group(context) |
496 | + for security_group_name in security_group: |
497 | + group = db.security_group_get_by_name(context, |
498 | + context.project_id, |
499 | + security_group_name) |
500 | + security_groups.append(group['id']) |
501 | + |
502 | + if key_data is None and key_name: |
503 | + key_pair = db.key_pair_get(context, context.user_id, key_name) |
504 | + key_data = key_pair['public_key'] |
505 | + |
506 | + type_data = instance_types.INSTANCE_TYPES[instance_type] |
507 | + base_options = { |
508 | + 'reservation_id': utils.generate_uid('r'), |
509 | + 'image_id': image_id, |
510 | + 'kernel_id': kernel_id, |
511 | + 'ramdisk_id': ramdisk_id, |
512 | + 'state_description': 'scheduling', |
513 | + 'user_id': context.user_id, |
514 | + 'project_id': context.project_id, |
515 | + 'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()), |
516 | + 'instance_type': instance_type, |
517 | + 'memory_mb': type_data['memory_mb'], |
518 | + 'vcpus': type_data['vcpus'], |
519 | + 'local_gb': type_data['local_gb'], |
520 | + 'display_name': name, |
521 | + 'display_description': description, |
522 | + 'key_name': key_name, |
523 | + 'key_data': key_data} |
524 | + |
525 | + elevated = context.elevated() |
526 | + instances = [] |
527 | + logging.debug("Going to run %s instances...", num_instances) |
528 | + for num in range(num_instances): |
529 | + instance = dict(mac_address=utils.generate_mac(), |
530 | + launch_index=num, |
531 | + **base_options) |
532 | + instance_ref = self.create_instance(context, security_groups, |
533 | + **instance) |
534 | + instance_id = instance_ref['id'] |
535 | + internal_id = instance_ref['internal_id'] |
536 | + hostname = generate_hostname(internal_id) |
537 | + self.update_instance(context, instance_id, hostname=hostname) |
538 | + instances.append(dict(id=instance_id, internal_id=internal_id, |
539 | + hostname=hostname, **instance)) |
540 | + |
541 | + # TODO(vish): This probably should be done in the scheduler |
542 | + # or in compute as a call. The network should be |
543 | + # allocated after the host is assigned and setup |
544 | + # can happen at the same time. |
545 | + address = self.network_manager.allocate_fixed_ip(context, |
546 | + instance_id, |
547 | + is_vpn) |
548 | + rpc.cast(elevated, |
549 | + network_topic, |
550 | + {"method": "setup_fixed_ip", |
551 | + "args": {"address": address}}) |
552 | + |
553 | + logging.debug("Casting to scheduler for %s/%s's instance %s" % |
554 | + (context.project_id, context.user_id, instance_id)) |
555 | + rpc.cast(context, |
+                     FLAGS.scheduler_topic,
+                     {"method": "run_instance",
+                      "args": {"topic": FLAGS.compute_topic,
+                               "instance_id": instance_id}})
+
+        return instances
+
+    def ensure_default_security_group(self, context):
+        try:
+            db.security_group_get_by_name(context, context.project_id,
+                                          'default')
+        except exception.NotFound:
+            values = {'name': 'default',
+                      'description': 'default',
+                      'user_id': context.user_id,
+                      'project_id': context.project_id}
+            group = db.security_group_create(context, values)
+
+    def create_instance(self, context, security_groups=None, **kwargs):
+        """Creates the instance in the datastore and returns the
+        new instance as a mapping
+
+        :param context: The security context
+        :param security_groups: list of security group ids to
+                                attach to the instance
+        :param kwargs: All additional keyword args are treated
+                       as data fields of the instance to be
+                       created
+
+        :retval Returns a mapping of the instance information
+                that has just been created
+
+        """
+        instance_ref = self.db.instance_create(context, kwargs)
+        inst_id = instance_ref['id']
+        # Set sane defaults if not specified
+        if kwargs.get('display_name') is None:
+            display_name = "Server %s" % instance_ref['internal_id']
+            instance_ref['display_name'] = display_name
+            self.db.instance_update(context, inst_id,
+                                    {'display_name': display_name})
+
+        elevated = context.elevated()
+        if not security_groups:
+            security_groups = []
+        for security_group_id in security_groups:
+            self.db.instance_add_security_group(elevated,
+                                                inst_id,
+                                                security_group_id)
+        return instance_ref
+
+    def update_instance(self, context, instance_id, **kwargs):
+        """Updates the instance in the datastore.
+
+        :param context: The security context
+        :param instance_id: ID of the instance to update
+        :param kwargs: All additional keyword args are treated
+                       as data fields of the instance to be
+                       updated
+
+        :retval None
+
+        """
+        self.db.instance_update(context, instance_id, kwargs)

=== modified file 'nova/compute/instance_types.py'
--- nova/compute/instance_types.py	2010-10-18 22:58:42 +0000
+++ nova/compute/instance_types.py	2010-12-02 17:38:10 +0000
@@ -21,9 +21,29 @@
 The built-in instance properties.
 """
 
+from nova import flags
+
+FLAGS = flags.FLAGS
 INSTANCE_TYPES = {
     'm1.tiny': dict(memory_mb=512, vcpus=1, local_gb=0, flavorid=1),
     'm1.small': dict(memory_mb=2048, vcpus=1, local_gb=20, flavorid=2),
     'm1.medium': dict(memory_mb=4096, vcpus=2, local_gb=40, flavorid=3),
     'm1.large': dict(memory_mb=8192, vcpus=4, local_gb=80, flavorid=4),
     'm1.xlarge': dict(memory_mb=16384, vcpus=8, local_gb=160, flavorid=5)}
+
+
+def get_by_type(instance_type):
+    """Build instance data structure and save it to the data store."""
+    if instance_type is None:
+        return FLAGS.default_instance_type
+    if instance_type not in INSTANCE_TYPES:
+        raise exception.ApiError("Unknown instance type: %s",
+                                 instance_type)
+    return instance_type
+
+
+def get_by_flavor_id(flavor_id):
+    for instance_type, details in INSTANCE_TYPES.iteritems():
+        if details['flavorid'] == flavor_id:
+            return instance_type
+    return FLAGS.default_instance_type

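The lookup behaviour the two new helpers implement can be exercised on its own. A minimal sketch follows — the table is trimmed to three entries, `'m1.small'` stands in for `FLAGS.default_instance_type`, and `ValueError` stands in for `exception.ApiError` so the sketch has no nova dependencies:

```python
# Trimmed mirror of INSTANCE_TYPES from the diff; 'm1.small' stands in
# for FLAGS.default_instance_type, which nova resolves from flags at runtime.
INSTANCE_TYPES = {
    'm1.tiny': dict(memory_mb=512, vcpus=1, local_gb=0, flavorid=1),
    'm1.small': dict(memory_mb=2048, vcpus=1, local_gb=20, flavorid=2),
    'm1.medium': dict(memory_mb=4096, vcpus=2, local_gb=40, flavorid=3),
}
DEFAULT_INSTANCE_TYPE = 'm1.small'


def get_by_type(instance_type):
    """EC2-style name -> validated name; None falls back to the default."""
    if instance_type is None:
        return DEFAULT_INSTANCE_TYPE
    if instance_type not in INSTANCE_TYPES:
        # ValueError stands in for exception.ApiError here.
        raise ValueError("Unknown instance type: %s" % instance_type)
    return instance_type


def get_by_flavor_id(flavor_id):
    """Reverse lookup: OpenStack flavor id -> instance type name."""
    for instance_type, details in INSTANCE_TYPES.items():
        if details['flavorid'] == flavor_id:
            return instance_type
    # Unknown flavor ids silently fall back to the default.
    return DEFAULT_INSTANCE_TYPE
```

Note the asymmetry: an unknown type name raises, while an unknown flavor id falls back to the default type.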
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py	2010-12-02 16:08:56 +0000
+++ nova/compute/manager.py	2010-12-02 17:38:10 +0000
@@ -39,13 +39,13 @@
 
 from twisted.internet import defer
 
+from nova import db
 from nova import exception
 from nova import flags
 from nova import manager
 from nova import utils
 from nova.compute import power_state
 
-
 FLAGS = flags.FLAGS
 flags.DEFINE_string('instances_path', '$state_path/instances',
                     'where instances are stored on disk')
@@ -84,53 +84,6 @@
         """This call passes stright through to the virtualization driver."""
         yield self.driver.refresh_security_group(security_group_id)
 
-    def create_instance(self, context, security_groups=None, **kwargs):
-        """Creates the instance in the datastore and returns the
-        new instance as a mapping
-
-        :param context: The security context
-        :param security_groups: list of security group ids to
-                                attach to the instance
-        :param kwargs: All additional keyword args are treated
-                       as data fields of the instance to be
-                       created
-
-        :retval Returns a mapping of the instance information
-                that has just been created
-
-        """
-        instance_ref = self.db.instance_create(context, kwargs)
-        inst_id = instance_ref['id']
-        # Set sane defaults if not specified
-        if kwargs.get('display_name') is None:
-            display_name = "Server %s" % instance_ref['internal_id']
-            instance_ref['display_name'] = display_name
-            self.db.instance_update(context, inst_id,
-                                    {'display_name': display_name})
-
-        elevated = context.elevated()
-        if not security_groups:
-            security_groups = []
-        for security_group_id in security_groups:
-            self.db.instance_add_security_group(elevated,
-                                                inst_id,
-                                                security_group_id)
-        return instance_ref
-
-    def update_instance(self, context, instance_id, **kwargs):
-        """Updates the instance in the datastore.
-
-        :param context: The security context
-        :param instance_id: ID of the instance to update
-        :param kwargs: All additional keyword args are treated
-                       as data fields of the instance to be
-                       updated
-
-        :retval None
-
-        """
-        self.db.instance_update(context, instance_id, kwargs)
-
     @defer.inlineCallbacks
     @exception.wrap_exception
     def run_instance(self, context, instance_id, **_kwargs):

=== added file 'nova/db/base.py'
--- nova/db/base.py	1970-01-01 00:00:00 +0000
+++ nova/db/base.py	2010-12-02 17:38:10 +0000
@@ -0,0 +1,36 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2010 United States Government as represented by the
+# Administrator of the National Aeronautics and Space Administration.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""
+Base class for classes that need modular database access.
+"""
+
+from nova import utils
+from nova import flags
+
+FLAGS = flags.FLAGS
+flags.DEFINE_string('db_driver', 'nova.db.api',
+                    'driver to use for database access')
+
+
+class Base(object):
+    """DB driver is injected in the init method"""
+    def __init__(self, db_driver=None):
+        if not db_driver:
+            db_driver = FLAGS.db_driver
+        self.db = utils.import_object(db_driver)  # pylint: disable-msg=C0103

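The driver-injection pattern that `nova.db.base.Base` factors out can be sketched standalone. In this sketch, a simplified `import_object` stands in for `nova.utils.import_object`, the stdlib `json` module plays the role of the default driver (instead of `nova.db.api`), and `'localhost'` stands in for `FLAGS.host`:

```python
import importlib

DEFAULT_DB_DRIVER = 'json'  # stand-in for the FLAGS.db_driver default


def import_object(import_str):
    """Simplified stand-in for nova.utils.import_object: import a module
    (or a module-level attribute) by dotted path and return it."""
    try:
        return importlib.import_module(import_str)
    except ImportError:
        module, _, attr = import_str.rpartition('.')
        return getattr(importlib.import_module(module), attr)


class Base(object):
    """DB driver is injected in __init__ so tests can swap in fakes."""

    def __init__(self, db_driver=None):
        if not db_driver:
            db_driver = DEFAULT_DB_DRIVER
        self.db = import_object(db_driver)


class Manager(Base):
    """Mirrors how nova.manager.Manager now delegates driver injection."""

    def __init__(self, host=None, db_driver=None):
        self.host = host or 'localhost'
        super(Manager, self).__init__(db_driver)


m = Manager()  # m.db is the imported driver module
```

The point of moving this into a shared base class is that both managers and the new `ComputeAPI` can get `self.db` the same way without duplicating the flag handling.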
=== modified file 'nova/manager.py'
--- nova/manager.py	2010-11-07 19:51:40 +0000
+++ nova/manager.py	2010-12-02 17:38:10 +0000
@@ -53,23 +53,19 @@
 
 from nova import utils
 from nova import flags
+from nova.db import base
 
 from twisted.internet import defer
 
 FLAGS = flags.FLAGS
-flags.DEFINE_string('db_driver', 'nova.db.api',
-                    'driver to use for volume creation')
-
-
-class Manager(object):
-    """DB driver is injected in the init method"""
+
+
+class Manager(base.Base):
     def __init__(self, host=None, db_driver=None):
         if not host:
             host = FLAGS.host
         self.host = host
-        if not db_driver:
-            db_driver = FLAGS.db_driver
-        self.db = utils.import_object(db_driver)  # pylint: disable-msg=C0103
+        super(Manager, self).__init__(db_driver)
 
     @defer.inlineCallbacks
     def periodic_tasks(self, context=None):

=== modified file 'nova/quota.py'
--- nova/quota.py	2010-10-21 18:49:51 +0000
+++ nova/quota.py	2010-12-02 17:38:10 +0000
@@ -94,3 +94,8 @@
     quota = get_quota(context, project_id)
     allowed_floating_ips = quota['floating_ips'] - used_floating_ips
     return min(num_floating_ips, allowed_floating_ips)
+
+
+class QuotaError(exception.ApiError):
+    """Quota Exceeeded"""
+    pass

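Deriving `QuotaError` from the existing `ApiError` means any caller already catching `ApiError` keeps working after quota errors move out of `cloud.py`. A standalone sketch of that property, with `ApiError` as a stand-in for `nova.exception.ApiError` and a hypothetical `allocate` helper:

```python
class ApiError(Exception):
    """Stand-in for nova.exception.ApiError."""


class QuotaError(ApiError):
    """Quota exceeded."""


def allocate(requested, allowed):
    # Hypothetical helper: raise when the request exceeds the quota.
    if requested > allowed:
        raise QuotaError("Instance quota exceeded: %d > %d"
                         % (requested, allowed))
    return requested


try:
    allocate(5, 2)
except ApiError as exc:
    # An ApiError handler also catches the QuotaError subclass.
    caught = type(exc).__name__
```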
=== modified file 'nova/tests/api/openstack/fakes.py'
--- nova/tests/api/openstack/fakes.py	2010-11-30 18:52:46 +0000
+++ nova/tests/api/openstack/fakes.py	2010-12-02 17:38:10 +0000
@@ -72,7 +72,7 @@
 
 
 def stub_out_image_service(stubs):
-    def fake_image_show(meh, id):
+    def fake_image_show(meh, context, id):
         return dict(kernelId=1, ramdiskId=1)
 
     stubs.Set(nova.image.local.LocalImageService, 'show', fake_image_show)

=== modified file 'nova/tests/api/openstack/test_servers.py'
--- nova/tests/api/openstack/test_servers.py	2010-12-01 20:18:24 +0000
+++ nova/tests/api/openstack/test_servers.py	2010-12-02 17:38:10 +0000
@@ -43,6 +43,10 @@
     return [stub_instance(i, user_id) for i in xrange(5)]
 
 
+def return_security_group(context, instance_id, security_group_id):
+    pass
+
+
 def stub_instance(id, user_id=1):
     return Instance(id=id, state=0, image_id=10, display_name='server%s' % id,
                     user_id=user_id)
@@ -63,6 +67,8 @@
                        return_server)
         self.stubs.Set(nova.db.api, 'instance_get_all_by_user',
                        return_servers)
+        self.stubs.Set(nova.db.api, 'instance_add_security_group',
+                       return_security_group)
 
     def tearDown(self):
         self.stubs.UnsetAll()

=== modified file 'nova/tests/compute_unittest.py'
--- nova/tests/compute_unittest.py	2010-12-02 16:08:56 +0000
+++ nova/tests/compute_unittest.py	2010-12-02 17:38:10 +0000
@@ -31,6 +31,7 @@
 from nova import test
 from nova import utils
 from nova.auth import manager
+from nova.compute import api as compute_api
 
 FLAGS = flags.FLAGS
 
@@ -43,6 +44,7 @@
         self.flags(connection_type='fake',
                    network_manager='nova.network.manager.FlatManager')
         self.compute = utils.import_object(FLAGS.compute_manager)
+        self.compute_api = compute_api.ComputeAPI()
         self.manager = manager.AuthManager()
         self.user = self.manager.create_user('fake', 'fake', 'fake')
         self.project = self.manager.create_project('fake', 'fake', 'fake')
@@ -70,7 +72,8 @@
         """Verify that an instance cannot be created without a display_name."""
         cases = [dict(), dict(display_name=None)]
         for instance in cases:
-            ref = self.compute.create_instance(self.context, None, **instance)
+            ref = self.compute_api.create_instance(self.context, None,
+                                                   **instance)
             try:
                 self.assertNotEqual(ref.display_name, None)
             finally:
@@ -86,9 +89,9 @@
                   'user_id': self.user.id,
                   'project_id': self.project.id}
         group = db.security_group_create(self.context, values)
-        ref = self.compute.create_instance(self.context,
-                                           security_groups=[group['id']],
-                                           **inst)
+        ref = self.compute_api.create_instance(self.context,
+                                               security_groups=[group['id']],
+                                               **inst)
         # reload to get groups
         instance_ref = db.instance_get(self.context, ref['id'])
         try:

=== modified file 'nova/tests/quota_unittest.py'
--- nova/tests/quota_unittest.py	2010-11-17 21:23:12 +0000
+++ nova/tests/quota_unittest.py	2010-12-02 17:38:10 +0000
@@ -94,11 +94,12 @@
         for i in range(FLAGS.quota_instances):
             instance_id = self._create_instance()
             instance_ids.append(instance_id)
-        self.assertRaises(cloud.QuotaError, self.cloud.run_instances,
+        self.assertRaises(quota.QuotaError, self.cloud.run_instances,
                           self.context,
                           min_count=1,
                           max_count=1,
-                          instance_type='m1.small')
+                          instance_type='m1.small',
+                          image_id='fake')
         for instance_id in instance_ids:
             db.instance_destroy(self.context, instance_id)
 
@@ -106,11 +107,12 @@
         instance_ids = []
         instance_id = self._create_instance(cores=4)
         instance_ids.append(instance_id)
-        self.assertRaises(cloud.QuotaError, self.cloud.run_instances,
+        self.assertRaises(quota.QuotaError, self.cloud.run_instances,
                           self.context,
                           min_count=1,
                           max_count=1,
-                          instance_type='m1.small')
+                          instance_type='m1.small',
+                          image_id='fake')
         for instance_id in instance_ids:
             db.instance_destroy(self.context, instance_id)
 
@@ -119,7 +121,7 @@
         for i in range(FLAGS.quota_volumes):
             volume_id = self._create_volume()
             volume_ids.append(volume_id)
-        self.assertRaises(cloud.QuotaError, self.cloud.create_volume,
+        self.assertRaises(quota.QuotaError, self.cloud.create_volume,
                           self.context,
                           size=10)
         for volume_id in volume_ids:
@@ -129,7 +131,7 @@
         volume_ids = []
         volume_id = self._create_volume(size=20)
         volume_ids.append(volume_id)
-        self.assertRaises(cloud.QuotaError,
+        self.assertRaises(quota.QuotaError,
                           self.cloud.create_volume,
                           self.context,
                           size=10)
@@ -146,6 +148,6 @@
         # make an rpc.call, the test just finishes with OK. It
         # appears to be something in the magic inline callbacks
         # that is breaking.
-        self.assertRaises(cloud.QuotaError, self.cloud.allocate_address,
+        self.assertRaises(quota.QuotaError, self.cloud.allocate_address,
                           self.context)
         db.floating_ip_destroy(context.get_admin_context(), address)