Ganeti automatic instance allocation
====================================

Documents Ganeti version 2.1

.. contents::

Introduction
------------

Currently in Ganeti the admin has to specify the exact locations for
an instance's node(s). This prevents a completely automatic node
evacuation, and is in general a nuisance.

The *iallocator* framework will enable automatic placement via
external scripts, which allows customization of the cluster layout per
the site's requirements.

User-visible changes
~~~~~~~~~~~~~~~~~~~~

There are two parts of the Ganeti operation that are impacted by the
auto-allocation: how the cluster knows which allocator algorithms are
available, and how the admin uses them when creating instances.

An allocation algorithm is just the filename of a program installed in
a defined list of directories.

Cluster configuration
~~~~~~~~~~~~~~~~~~~~~

At configure time, the list of the directories can be selected via the
``--with-iallocator-search-path=LIST`` option, where *LIST* is a
comma-separated list of directories. If not given, this defaults to
``$libdir/ganeti/iallocators``, i.e. for an installation under
``/usr``, this will be ``/usr/lib/ganeti/iallocators``.

Ganeti will then search for the allocator script in the configured list
of directories, using the first file whose name matches the one given
by the user.
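
This lookup can be pictured with a short Python sketch; the helper name
and the default search path are illustrative, not part of Ganeti:

```python
import os

# Sketch of the search described above: return the first script in the
# configured directory list whose filename matches the requested name.
# The function name and DEFAULT_SEARCH_PATH are illustrative only.
DEFAULT_SEARCH_PATH = ["/usr/lib/ganeti/iallocators"]


def find_iallocator(name, search_path=None):
    for dirname in search_path or DEFAULT_SEARCH_PATH:
        candidate = os.path.join(dirname, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```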

Command line interface changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The node selection options in instance add and instance replace disks
can be replaced by the new ``--iallocator=NAME`` option (shortened to
``-I``), which will cause the auto-assignment of nodes with the given
iallocator. The selected node(s) will be shown as part of the command
output.

IAllocator API
--------------

The protocol for communication between Ganeti and an allocator script
will be the following:

#. Ganeti launches the program with a single argument, a filename that
   contains a JSON-encoded structure (the input message)

#. if the script finishes with an exit code different from zero, it is
   considered a general failure and the full output will be reported to
   the user; this can be the case when the allocator can't parse the
   input message

#. if the allocator finishes with exit code zero, it is expected to
   output (on its stdout) a JSON-encoded structure (the response)
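
A minimal script obeying these three steps could look as follows in
Python; this is an illustrative skeleton only (the refuse-everything
policy in ``handle`` is an assumption for demonstration, not an actual
Ganeti allocator):

```python
#!/usr/bin/env python
import json
import sys


def handle(msg):
    """Build a response for an input message (placeholder policy)."""
    rtype = msg["request"]["type"]
    if rtype not in ("allocate", "relocate", "multi-evacuate"):
        return {"success": False,
                "info": "unknown request type %r" % rtype,
                "result": []}
    # A real allocator would compute a placement here; even on failure
    # the protocol requires an empty list as the result.
    return {"success": False,
            "info": "no placement policy implemented",
            "result": []}


def main():
    # Step 1: Ganeti passes a single argument, the input file name.
    with open(sys.argv[1]) as fh:
        msg = json.load(fh)  # a parse error exits non-zero (step 2)
    # Step 3: exit code zero plus a JSON response on stdout.
    json.dump(handle(msg), sys.stdout)


if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```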

Input message
~~~~~~~~~~~~~

The input message will be the JSON encoding of a dictionary containing
the following:

version
  the version of the protocol; this document
  specifies version 2

cluster_name
  the cluster name

cluster_tags
  the list of cluster tags

enabled_hypervisors
  the list of enabled hypervisors

request
  a dictionary containing the request data:

  type
    the request type; this can be either ``allocate``, ``relocate`` or
    ``multi-evacuate``; the ``allocate`` request is used when a new
    instance needs to be placed on the cluster, while the ``relocate``
    request is used when an existing instance needs to be moved within
    the cluster; the ``multi-evacuate`` protocol requests that the
    script computes the optimal relocate solution for all secondary
    instances of the given nodes

  The following keys are needed in allocate/relocate mode:

  name
    the name of the instance; if the request is a relocation, then this
    name will be found in the list of instances (see below); otherwise
    it is the FQDN of the new instance

  required_nodes
    how many nodes the algorithm should return; while this information
    can be deduced from the instance's disk template, it's better if
    this computation is left to Ganeti, as allocator scripts are then
    less sensitive to changes in the disk templates

  disk_space_total
    the total disk space that will be used by this instance on the
    (new) nodes; again, this information can be computed from the list
    of instance disks and its template type, but Ganeti is better
    suited to compute it

  If the request is an allocation, then there are extra fields in the
  request dictionary:

  disks
    list of dictionaries holding the disk definitions for this
    instance (in the order they are exported to the hypervisor):

    mode
      either ``r`` or ``w`` denoting if the disk is read-only or
      writable

    size
      the size of this disk in mebibytes

  nics
    a list of dictionaries holding the network interfaces for this
    instance, containing:

    ip
      the IP address that Ganeti knows for this instance, or null

    mac
      the MAC address for this interface

    bridge
      the bridge to which this interface will be connected

  vcpus
    the number of VCPUs for the instance

  disk_template
    the disk template for the instance

  memory
    the memory size for the instance

  os
    the OS type for the instance

  tags
    the list of the instance's tags

  hypervisor
    the hypervisor of this instance


  If the request is of type relocate, then there is one more entry in
  the request dictionary, named ``relocate_from``, and it contains a
  list of nodes to move the instance away from; note that with Ganeti
  2.0, this list will always contain a single node, the current
  secondary of the instance.

  The multi-evacuate mode has instead a single request argument:

  evac_nodes
    the names of the nodes to be evacuated

nodegroups
  a dictionary with the data for the cluster's node groups; it is keyed
  on the group UUID, and the values are a dictionary with the following
  keys:

  name
    the node group name

instances
  a dictionary with the data for the instances currently existing on
  the cluster, indexed by instance name; the contents are similar to
  the instance definitions for the allocate mode, with the addition of:

  admin_up
    whether this instance is set to run (this does not reflect the
    actual status of the instance)

  nodes
    list of nodes on which this instance is placed; the primary node
    of the instance is always the first one

nodes
  dictionary with the data for the nodes in the cluster, indexed by
  the node name; the dict contains [*]_ :

  total_disk
    the total disk size of this node (mebibytes)

  free_disk
    the free disk space on the node

  total_memory
    the total memory size

  free_memory
    free memory on the node; note that currently this does not take
    into account the instances which are down on the node

  total_cpus
    the physical number of CPUs present on the machine; depending on
    the hypervisor, this might or might not be equal to how many CPUs
    the node operating system sees

  primary_ip
    the primary IP address of the node

  secondary_ip
    the secondary IP address of the node (the one used for the DRBD
    replication); note that this can be the same as the primary one

  tags
    list with the tags of the node

  master_candidate
    a boolean flag denoting whether this node is a master candidate

  drained
    a boolean flag denoting whether this node is being drained

  offline
    a boolean flag denoting whether this node is offline

  i_pri_memory
    total memory required by primary instances

  i_pri_up_memory
    total memory required by running primary instances

  group
    the node group that this node belongs to

  No allocations should be made on nodes having either the ``drained``
  or ``offline`` flags set. More details about these node status flags
  are available in the manpage :manpage:`ganeti(7)`.

.. [*] Note that no run-time data is present for offline, drained or
   non-vm_capable nodes; this means the keys total_memory,
   reserved_memory, free_memory, total_disk, free_disk, total_cpus,
   i_pri_memory and i_pri_up_memory will be absent
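
The drained/offline rule can be applied with a one-line filter; a
sketch in Python (the helper name is made up, and ``dict.get`` is used
because, per the footnote, some keys may be absent for such nodes):

```python
# Sketch: drop nodes that must not receive allocations, per the
# drained/offline rule above.  The helper name is illustrative.
def allocatable_nodes(nodes):
    """Return the names of nodes eligible for new allocations."""
    return sorted(name for name, info in nodes.items()
                  if not info.get("drained") and not info.get("offline"))
```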


Response message
~~~~~~~~~~~~~~~~

The response message is much simpler than the input one. It is also a
dict, having three keys:

success
  a boolean value denoting if the allocation was successful or not

info
  a string with information from the scripts; if the allocation fails,
  this will be shown to the user

result
  the output of the algorithm; even if the algorithm failed
  (i.e. success is false), this must be returned as an empty list

  for allocate/relocate, this is the list of node(s) for the instance;
  note that the length of this list must equal the ``required_nodes``
  entry in the input message, otherwise Ganeti will consider the result
  as failed

  for multi-evacuation mode, this is a list of lists; each element of
  the list is a list of instance name and the new secondary node

.. note:: Current Ganeti version accepts either ``result`` or ``nodes``
   as a backwards-compatibility measure (older versions only supported
   ``nodes``)
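
For allocate/relocate responses, the constraints above can be
summarized as a small validation helper; this is an illustrative
sketch of the checks, not Ganeti's actual code:

```python
# Sketch of the response checks described above (illustrative helper,
# not part of Ganeti); applies to allocate/relocate responses.
def check_response(resp, required_nodes):
    for key in ("success", "info", "result"):
        if key not in resp:
            raise ValueError("missing key %r in response" % key)
    if not isinstance(resp["result"], list):
        raise ValueError("result must be a list")
    if not resp["success"] and resp["result"]:
        raise ValueError("failed requests must return an empty list")
    if resp["success"] and len(resp["result"]) != required_nodes:
        raise ValueError("result length must equal required_nodes")
```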

Examples
--------

Input messages to scripts
~~~~~~~~~~~~~~~~~~~~~~~~~

Input message, new instance allocation::

  {
    "cluster_tags": [],
    "request": {
      "required_nodes": 2,
      "name": "instance3.example.com",
      "tags": [
        "type:test",
        "owner:foo"
      ],
      "type": "allocate",
      "disks": [
        {
          "mode": "w",
          "size": 1024
        },
        {
          "mode": "w",
          "size": 2048
        }
      ],
      "nics": [
        {
          "ip": null,
          "mac": "00:11:22:33:44:55",
          "bridge": null
        }
      ],
      "vcpus": 1,
      "disk_template": "drbd",
      "memory": 2048,
      "disk_space_total": 3328,
      "os": "etch-image"
    },
    "cluster_name": "cluster1.example.com",
    "instances": {
      "instance1.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 64
          },
          {
            "mode": "w",
            "size": 512
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:00:60:bf",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "plain",
        "memory": 128,
        "nodes": [
          "nodee1.com"
        ],
        "os": "etch-image"
      },
      "instance2.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 512
          },
          {
            "mode": "w",
            "size": 256
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:55:f8:38",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "drbd",
        "memory": 512,
        "nodes": [
          "node2.example.com",
          "node3.example.com"
        ],
        "os": "etch-image"
      }
    },
    "version": 1,
    "nodes": {
      "node1.example.com": {
        "total_disk": 858276,
        "primary_ip": "198.51.100.1",
        "secondary_ip": "192.0.2.1",
        "tags": [],
        "free_memory": 3505,
        "free_disk": 856740,
        "total_memory": 4095
      },
      "node2.example.com": {
        "total_disk": 858240,
        "primary_ip": "198.51.100.2",
        "secondary_ip": "192.0.2.2",
        "tags": ["test"],
        "free_memory": 3505,
        "free_disk": 848320,
        "total_memory": 4095
      },
      "node3.example.com.com": {
        "total_disk": 572184,
        "primary_ip": "198.51.100.3",
        "secondary_ip": "192.0.2.3",
        "tags": [],
        "free_memory": 3505,
        "free_disk": 570648,
        "total_memory": 4095
      }
    }
  }
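
The ``disk_space_total`` of 3328 in this request corresponds to the two
disks (1024 + 2048 MiB) plus what appears to be a 128 MiB per-disk DRBD
metadata overhead. The sketch below reproduces that figure, but the
overhead constant is inferred from this example only; the authoritative
computation is Ganeti's, which is exactly why the document says to
leave it to Ganeti:

```python
# Inferred sketch of disk_space_total for the example above: the sum
# of the disk sizes plus an assumed 128 MiB metadata overhead per disk
# for DRBD.  Not Ganeti's authoritative computation.
DRBD_META_MIB = 128  # assumed per-disk overhead


def disk_space_total(disks, disk_template):
    total = sum(d["size"] for d in disks)
    if disk_template == "drbd":
        total += DRBD_META_MIB * len(disks)
    return total
```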

Input message, relocation. Since only the ``request`` entry in the
input message changes, we show only this changed entry::

  "request": {
    "relocate_from": [
      "node3.example.com"
    ],
    "required_nodes": 1,
    "type": "relocate",
    "name": "instance2.example.com",
    "disk_space_total": 832
  },


Input message, node evacuation::

  "request": {
    "evac_nodes": [
      "node2"
    ],
    "type": "multi-evacuate"
  },


Response messages
~~~~~~~~~~~~~~~~~

Successful response message::

  {
    "info": "Allocation successful",
    "result": [
      "node2.example.com",
      "node1.example.com"
    ],
    "success": true
  }

Failed response message::

  {
    "info": "Can't find a suitable node for position 2 (already selected: node2.example.com)",
    "result": [],
    "success": false
  }

Successful node evacuation message::

  {
    "info": "Request successful",
    "result": [
      [
        "instance1",
        "node3"
      ],
      [
        "instance2",
        "node1"
      ]
    ],
    "success": true
  }
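
Each element of the multi-evacuate result pairs an instance name with
its new secondary node; a consumer of this response could turn it into
a mapping (illustrative snippet, not Ganeti code):

```python
# Turn the multi-evacuate result above into an
# instance -> new secondary node mapping.
result = [["instance1", "node3"], ["instance2", "node1"]]
moves = dict(result)  # each inner list is a (key, value) pair
```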


Command line messages
~~~~~~~~~~~~~~~~~~~~~

::

  # gnt-instance add -t plain -m 2g --os-size 1g --swap-size 512m --iallocator dumb-allocator -o etch-image instance3
  Selected nodes for the instance: node1.example.com
  * creating instance disks...
  [...]

  # gnt-instance add -t plain -m 3400m --os-size 1g --swap-size 512m --iallocator dumb-allocator -o etch-image instance4
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'dumb-allocator': Can't find a suitable node for position 1 (already selected: )

  # gnt-instance add -t drbd -m 1400m --os-size 1g --swap-size 512m --iallocator dumb-allocator -o etch-image instance5
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'dumb-allocator': Can't find a suitable node for position 2 (already selected: node1.example.com)

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: