=================
Ganeti 2.0 design
=================

This document describes the major changes in Ganeti 2.0 compared to
the 1.2 version.

The 2.0 version will constitute a rewrite of the 'core' architecture,
paving the way for additional features in future 2.x versions.

.. contents:: :depth: 3

Objective
=========

Ganeti 1.2 has many scalability issues and restrictions due to its
roots as software for managing small and 'static' clusters.

Version 2.0 will attempt to remedy first the scalability issues and
then the restrictions.

Background
==========

While Ganeti 1.2 is usable, it severely limits the flexibility of the
cluster administration and imposes a very rigid model. It has the
following main scalability issues:

- only one operation at a time on the cluster [#]_
- poor handling of node failures in the cluster
- mixing hypervisors in a cluster not allowed

It also has a number of artificial restrictions, due to historical
design:

- fixed number of disks (two) per instance
- fixed number of NICs

.. [#] Replace disks will release the lock, but this is an exception
       and not a recommended way to operate

The 2.0 version is intended to address some of these problems, and
create a more flexible code base for future developments.

Among these problems, the single-operation-at-a-time restriction is
the biggest issue with the current version of Ganeti. It is such a big
impediment in operating bigger clusters that many times one is tempted
to remove the lock just to do a simple operation like starting an
instance while an OS installation is running.

Scalability problems
--------------------

Ganeti 1.2 has a single global lock, which is used for all cluster
operations.  This has been painful at various times, for example:

- It is impossible for two people to efficiently interact with a cluster
  (for example for debugging) at the same time.
- When batch jobs are running it's impossible to do other work (for
  example failovers/fixes) on a cluster.

This poses scalability problems: as clusters grow in node and instance
count, it becomes much more likely that operations which could
conceivably run in parallel (for example because they happen on
different nodes) are actually stalling each other while waiting for
the global lock, without a real reason for that to happen.

One of the main causes of this global lock (beside the higher
difficulty of ensuring data consistency in a more granular lock model)
is the fact that currently there is no long-lived process in Ganeti
that can coordinate multiple operations. Each command tries to acquire
the so called *cmd* lock and when it succeeds, it takes complete
ownership of the cluster configuration and state.

Other scalability problems are due to the design of the DRBD device
model, which assumed at its creation a low (one to four) number of
instances per node, which is no longer true with today's hardware.

Artificial restrictions
-----------------------

Ganeti 1.2 (and previous versions) have a fixed two-disk, one-NIC per
instance model. This is a purely artificial restriction, but it
touches so many areas (configuration, import/export, command line)
that removing it is better suited to a major release than a minor one.

Architecture issues
-------------------

The fact that each command is a separate process that reads the
cluster state, executes the command, and saves the new state is also
an issue on big clusters where the configuration data for the cluster
begins to be non-trivial in size.

Overview
========

In order to solve the scalability problems, a rewrite of the core
design of Ganeti is required. While the cluster operations themselves
won't change (e.g. start instance will do the same things), the way
these operations are scheduled internally will change radically.

The new design will change the cluster architecture to:

.. image:: arch-2.0.png

This differs from the 1.2 architecture by the addition of the master
daemon, which will be the only entity to talk to the node daemons.


Detailed design
===============

The changes for 2.0 can be split into roughly three areas:

- core changes that affect the design of the software
- features (or restriction removals) but which do not have a wide
  impact on the design
- user-level and API-level changes which translate into differences for
  the operation of the cluster

Core changes
------------

The main changes will be switching from a per-process model to a
daemon based model, where the individual gnt-* commands will be
clients that talk to this daemon (see `Master daemon`_). This will
allow us to get rid of the global cluster lock for most operations,
having instead a per-object lock (see `Granular locking`_). Also, the
daemon will be able to queue jobs, and this will allow the individual
clients to submit jobs without waiting for them to finish, and also
see the result of old requests (see `Job Queue`_).

Besides these major changes, another 'core' change, though not as
visible to the users, will be changing the model of object attribute
storage, separating it into name spaces (such that a Xen PVM instance
will not have the Xen HVM parameters). This will allow future
flexibility in defining additional parameters. For more details see
`Object parameters`_.

The various changes brought in by the master daemon model and the
read-write RAPI will require changes to the cluster security; we move
away from Twisted and use HTTP(s) for intra- and extra-cluster
communications. For more details, see the security document in the
doc/ directory.

Master daemon
~~~~~~~~~~~~~

In Ganeti 2.0, we will have the following *entities*:

- the master daemon (on the master node)
- the node daemon (on all nodes)
- the command line tools (on the master node)
- the RAPI daemon (on the master node)

The master-daemon related interaction paths are:

- (CLI tools/RAPI daemon) and the master daemon, via the so called
  *LUXI* API
- the master daemon and the node daemons, via the node RPC

There are also some additional interaction paths for exceptional cases:

- CLI tools might access via SSH the nodes (for ``gnt-cluster copyfile``
  and ``gnt-cluster command``)
- master failover is a special case when a non-master node will SSH
  and do node-RPC calls to the current master

The protocol between the master daemon and the node daemons will be
changed from (Ganeti 1.2) Twisted PB (perspective broker) to HTTP(S),
using a simple PUT/GET of JSON-encoded messages. This is done due to
difficulties in working with the Twisted framework and its protocols
in a multithreaded environment, which we can overcome by using a
simpler stack (see the caveats section).

The protocol between the CLI/RAPI and the master daemon will be a
custom one (called *LUXI*): on a UNIX socket on the master node, with
rights restricted by filesystem permissions, the CLI/RAPI will talk to
the master daemon using JSON-encoded messages.

The operations supported over this internal protocol will be encoded
via a python library that will expose a simple API for its
users. Internally, the protocol will simply encode all objects in JSON
format and decode them on the receiver side.

For more details about the RAPI daemon see `Remote API changes`_, and
for the node daemon see `Node daemon changes`_.

The LUXI protocol
+++++++++++++++++

As described above, the protocol for making requests or queries to the
master daemon will be a UNIX-socket based simple RPC of JSON-encoded
messages.

The choice of a UNIX socket was made in order to get rid of the need
for authentication and authorisation inside Ganeti; for 2.0, the
permissions on the Unix socket itself will determine the access
rights.

We will have two main classes of operations over this API:

- cluster query functions
- job related functions

The cluster query functions are usually short-duration, and are the
equivalent of the ``OP_QUERY_*`` opcodes in Ganeti 1.2 (and they are
still implemented internally with these opcodes). The clients are
guaranteed to receive the response in a reasonable time via a timeout.

The job-related functions will be:

- submit job
- query job (which could also be categorized in the query-functions)
- archive job (see the job queue design doc)
- wait for job change, which allows a client to wait without polling

For more details of the actual operation list, see the `Job Queue`_.

Both requests and responses will consist of a JSON-encoded message
followed by the ``ETX`` character (ASCII decimal 3), which is not a
valid character in JSON messages and thus can serve as a message
delimiter. The contents of the messages will be a dictionary with two
fields:

:method:
  the name of the method called
:args:
  the arguments to the method, as a list (no keyword arguments allowed)

Responses will follow the same format, with the two fields being:

:success:
  a boolean denoting the success of the operation
:result:
  the actual result, or error message in case of failure

There are two special values for the result field:

- in the case that the operation failed, and this field is a list of
  length two, the client library will try to interpret it as an
  exception, the first element being the exception type and the second
  one the actual exception arguments; this will allow a simple method
  of passing Ganeti-related exceptions across the interface
- for the *WaitForChange* call (that waits on the server for a job to
  change status), if the result is equal to ``nochange`` instead of the
  usual result for this call (a list of changes), then the library will
  internally retry the call; this is done in order to differentiate
  internally between the master daemon being hung and the job simply
  not having changed

Users of the API that don't use the provided python library should
take care of the above two cases.
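
As an illustration only (this is not the provided client library, and
the socket path is an assumption), a minimal client for this protocol
could look like::

  import json
  import socket

  ETX = chr(3)  # ASCII decimal 3, the message delimiter

  def call_luxi(method, args, address="/var/run/ganeti/master.sock"):
    """Send one request to the master daemon and return its result."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(address)
    try:
      sock.sendall(json.dumps({"method": method, "args": args}) + ETX)
      buf = ""
      while ETX not in buf:
        data = sock.recv(4096)
        if not data:
          break
        buf += data
      response = json.loads(buf.split(ETX, 1)[0])
    finally:
      sock.close()
    if not response["success"]:
      raise RuntimeError(response["result"])
    return response["result"]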


Master daemon implementation
++++++++++++++++++++++++++++

The daemon will be based around a main I/O thread that will wait for
new requests from the clients, and that does the setup/shutdown of the
other threads (pools).

There will be two other classes of threads in the daemon:

- job processing threads, part of a thread pool, and which are
  long-lived, started at daemon startup and terminated only at shutdown
  time
- client I/O threads, which are the ones that talk the local protocol
  (LUXI) to the clients, and are short-lived

Master startup/failover
+++++++++++++++++++++++

In Ganeti 1.x there is no protection against failing over the master
to a node with stale configuration. In effect, the responsibility of
correct failovers falls on the admin. This is true both for the new
master and for when an old, offline master starts up.

Since in 2.x we are extending the cluster state to cover the job queue
and have a daemon that will execute by itself the job queue, we want
to have more resilience for the master role.

The following algorithm will happen whenever a node is ready to
transition to the master role, either at startup time or at node
failover:

#. read the configuration file and parse the node list
   contained within

#. query all the nodes and make sure we obtain an agreement via
   a quorum of at least half plus one nodes for the following:

    - we have the latest configuration and job list (as
      determined by the serial number on the configuration and
      highest job ID on the job queue)

    - there is not even a single node having a newer
      configuration file

    - if we are not failing over (but just starting), the
      quorum agrees that we are the designated master

    - if any of the above is false, we prevent the current operation
      (i.e. we don't become the master)

#. at this point, the node transitions to the master role

#. for all the in-progress jobs, mark them as failed, with
   reason unknown or something similar (master failed, etc.)

Since due to exceptional conditions we could have a situation in which
no node can become the master due to inconsistent data, we will have
an override switch for the master daemon startup that will assume the
current node has the right data and will replicate all the
configuration files to the other nodes.

**Note**: the above algorithm is by no means an election algorithm; it
is a *confirmation* of the master role currently held by a node.
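
As a sketch of the confirmation step (the ``rpc_query_node`` helper and
the field names are hypothetical, for illustration only)::

  def confirm_master_role(my_name, my_serial, my_last_job, node_list):
    answers = [rpc_query_node(name) for name in node_list]
    reachable = [a for a in answers if a is not None]
    quorum = len(node_list) // 2 + 1
    if len(reachable) < quorum:
      return False  # no quorum, we cannot confirm anything
    for ans in reachable:
      # no node may have newer configuration or job data than we do
      if (ans["config_serial"] > my_serial or
          ans["last_job_id"] > my_last_job):
        return False
    # when just starting, the quorum must also agree we are the master
    votes = [a for a in reachable if a["master"] == my_name]
    return len(votes) >= quorum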

Logging
+++++++

The logging system will be switched completely to the standard python
logging module; currently it's logging-based, but exposes a different
API, which is just overhead. As such, the code will be switched over
to standard logging calls, and only the setup will be custom.

With this change, we will remove the separate debug/info/error logs,
and instead always have one logfile per daemon:

- master-daemon.log for the master daemon
- node-daemon.log for the node daemon (this is the same as in 1.2)
- rapi-daemon.log for the RAPI daemon logs
- rapi-access.log, an additional log file for the RAPI that will be
  in the standard HTTP log format for possible parsing by other tools

Since the :term:`watcher` will only submit jobs to the master for
startup of the instances, its log file will contain less information
than before, mainly that it will start the instance, but not the
results.
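
The custom setup part could then be a simple wrapper around the
standard module, along these lines (the log directory and format are
assumptions)::

  import logging

  def setup_logging(logfile="/var/log/ganeti/master-daemon.log",
                    debug=False):
    handler = logging.FileHandler(logfile)
    handler.setFormatter(logging.Formatter(
      "%(asctime)s %(levelname)s %(message)s"))
    root = logging.getLogger("")
    root.addHandler(handler)
    if debug:
      root.setLevel(logging.DEBUG)
    else:
      root.setLevel(logging.INFO)

  # afterwards, only standard logging calls are used in the code
  setup_logging()
  logging.info("master daemon starting")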

Node daemon changes
+++++++++++++++++++

The only change to the node daemon is that, since we need better
concurrency, we don't process the inter-node RPC calls in the node
daemon itself, but we fork and process each request in a separate
child.

Since we don't have many calls, and we only fork (not exec), the
overhead should be minimal.
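
A rough sketch of this model (``process_rpc_call`` is a hypothetical
per-call handler)::

  import os

  def handle_request(request):
    """Process one node-RPC request in a forked child."""
    pid = os.fork()
    if pid > 0:
      return pid      # parent: go back to accepting new requests
    try:
      process_rpc_call(request)
    finally:
      os._exit(0)     # the child must never return to the main loop

  # finished children are reaped in the main loop, for example with
  # os.waitpid(-1, os.WNOHANG)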

Caveats
+++++++

A discussed alternative is to keep the current individual processes
touching the cluster configuration model. The reasons we have not
chosen this approach are:

- the cost of reading and unserializing the cluster state
  today is not small enough that we can ignore it; the addition of
  the job queue will make the startup cost even higher. While this
  runtime cost is low, it can be on the order of a few seconds on
  bigger clusters, which for very quick commands is comparable to
  the actual duration of the computation itself

- individual commands would make it harder to implement a
  fire-and-forget job request, along the lines "start this
  instance but do not wait for it to finish"; it would require a
  model of backgrounding the operation and other things that are
  much better served by a daemon-based model

Another area of discussion is moving away from Twisted in this new
implementation. While Twisted has its advantages, there are also many
disadvantages to using it:

- first and foremost, it's not a library, but a framework; thus, if
  you use Twisted, all the code needs to be 'twisted-ized' and written
  in an asynchronous manner, using deferreds; while this method works,
  it's not a common way to code and it requires that the entire process
  workflow is based around a single *reactor* (Twisted name for a main
  loop)
- the more advanced granular locking that we want to implement would
  require, if written in the asynchronous manner, deep integration with
  the Twisted stack, to such an extent that business-logic is inseparable
  from the protocol coding; we felt that this is an unreasonable
  request, and that a good protocol library should allow complete
  separation of low-level protocol calls and business logic; by
  comparison, the threaded approach combined with the HTTP(S) protocol
  required (for the first iteration) absolutely no changes from the 1.2
  code, and later changes for optimizing the inter-node RPC calls
  required just syntactic changes (e.g.  ``rpc.call_...`` to
  ``self.rpc.call_...``)

Another issue is with the Twisted API stability - during the Ganeti
1.x lifetime, we had to implement workarounds many times for changes
in the Twisted version, so that for example 1.2 is able to use both
Twisted 2.x and 8.x.

In the end, since we already had an HTTP server library for the RAPI,
we just reused that for inter-node communication.


Granular locking
~~~~~~~~~~~~~~~~

We want to make sure that multiple operations can run in parallel on a
Ganeti Cluster. In order for this to happen we need to make sure
concurrently run operations don't step on each other's toes and break the
cluster.

This design addresses how we are going to deal with locking so that:

- we preserve data coherency
- we prevent deadlocks
- we prevent job starvation

Reaching the maximum possible parallelism is a Non-Goal. We have
identified a set of operations that are currently bottlenecks and need
to be parallelised and have worked on those. In the future it will be
possible to address other needs, thus making the cluster more and more
parallel one step at a time.

This section only talks about parallelising Ganeti-level operations, aka
Logical Units, and the locking needed for that. Any other
synchronization lock needed internally by the code is outside its scope.

Library details
+++++++++++++++

The proposed library has these features:

- internally managing all the locks, making the implementation
  transparent from their usage
- automatically grabbing multiple locks in the right order (avoid
  deadlock)
- ability to transparently handle conversion to more granularity
- support asynchronous operation (future goal)

Locking will be valid only on the master node and will not be a
distributed operation. Therefore, in case of master failure, the
operations currently running will be aborted and the locks will be
lost; it remains up to the administrator to clean up (if needed) the
operation result (e.g. make sure an instance is either installed
correctly or removed).

A corollary of this is that a master-failover operation with both
masters alive needs to happen while no operations are running, and
therefore no locks are held.

All the locks will be represented by objects (like
``lockings.SharedLock``), and the individual locks for each object
will be created at initialisation time, from the config file.

The API will have a way to grab one or more locks at the same time.
Any attempt to grab a lock while already holding one in the wrong
order will be checked for, and will fail.


The Locks
+++++++++

At the first stage we have decided to provide the following locks:

- One "config file" lock
- One lock per node in the cluster
- One lock per instance in the cluster

All the instance locks will need to be taken before the node locks, and
the node locks before the config lock. Locks will need to be acquired at
the same time for multiple instances and nodes, and the internal ordering
will be dealt with within the locking library, which, for simplicity,
will just use alphabetical order.

Each lock has the following three possible statuses:

- unlocked (anyone can grab the lock)
- shared (anyone can grab/have the lock but only in shared mode)
- exclusive (no one else can grab/have the lock)
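
As an illustration of these semantics only (the real lock class also
has to deal with ordering and fairness), a toy shared/exclusive lock
could look like::

  import threading

  class ToySharedLock(object):
    """Minimal shared/exclusive lock, for illustration only."""

    def __init__(self):
      self._cond = threading.Condition()
      self._sharers = 0         # > 0 means the lock is held shared
      self._exclusive = False   # True means it is held exclusively

    def acquire(self, shared=False):
      self._cond.acquire()
      try:
        if shared:
          while self._exclusive:
            self._cond.wait()
          self._sharers += 1
        else:
          while self._exclusive or self._sharers > 0:
            self._cond.wait()
          self._exclusive = True
      finally:
        self._cond.release()

    def release(self):
      self._cond.acquire()
      try:
        if self._exclusive:
          self._exclusive = False
        else:
          self._sharers -= 1
        self._cond.notifyAll()
      finally:
        self._cond.release()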

Handling conversion to more granularity
+++++++++++++++++++++++++++++++++++++++

In order to convert to a more granular approach transparently, each time
we split a lock into more granular ones we'll create a "metalock", which
will depend on those sub-locks and live for the time necessary for all
the code to convert (or forever, in some conditions). When a metalock
exists all converted code must acquire it in shared mode, so it can run
concurrently, but still be exclusive with old code, which acquires it
exclusively.

In the beginning the only such lock will be what replaces the current
"command" lock, and will acquire all the locks in the system, before
proceeding. This lock will be called the "Big Ganeti Lock" because
holding that one will avoid any other concurrent Ganeti operations.

We might also want to devise more metalocks (e.g. all nodes, all
nodes+config) in order to make it easier for some parts of the code to
acquire what they need without specifying it explicitly.

In the future things like the node locks could become metalocks, should
we decide to split them into an even more fine grained approach, but
this will probably be only after the first 2.0 version has been
released.

Adding/Removing locks
+++++++++++++++++++++

When a new instance or a new node is created an associated lock must be
added to the list. The relevant code will need to inform the locking
library of such a change.

This needs to be compatible with every other lock in the system,
especially metalocks that guarantee to grab sets of resources without
specifying them explicitly. The implementation of this will be handled
in the locking library itself.

When instances or nodes disappear from the cluster the relevant locks
must be removed. This is easier than adding new elements, as the code
which removes them must own them exclusively already, and thus deals
with metalocks exactly as normal code acquiring those locks. Any
operation queuing on a removed lock will fail after its removal.

Asynchronous operations
+++++++++++++++++++++++

For the first version the locking library will only export synchronous
operations, which will block till the needed locks are held, and only
fail if the request is impossible or somehow erroneous.

In the future we may want to implement different types of asynchronous
operations such as:

- try to acquire this lock set and fail if not possible
- try to acquire one of these lock sets and return the first one you
  were able to get (or after a timeout) (select/poll like)

These operations can be used to prioritize operations based on available
locks, rather than making them just blindly queue for acquiring them.
The inherent risk, though, is that any code using the first operation,
or setting a timeout for the second one, is susceptible to starvation
and thus may never be able to get the required locks and complete
certain tasks. Considering this, providing/using these operations should
not be among our first priorities.

Locking granularity
+++++++++++++++++++

For the first version of this code we'll convert each Logical Unit to
550 551 552 553 554
acquire/release the locks it needs, so locking will be at the Logical
Unit level.  In the future we may want to split logical units in
independent "tasklets" with their own locking requirements. A different
design doc (or mini design doc) will cover the move from Logical Units
to tasklets.

Code examples
+++++++++++++

In general when acquiring locks we should use a code path equivalent
to::

  lock.acquire()
  try:
    ...
    # other code
  finally:
    lock.release()

This makes sure we release all locks, and avoid possible deadlocks. Of
course extra care must be used not to leave, if possible, locked
structures in an unusable state. Note that with Python 2.5 a simpler
syntax will be possible, but we want to keep compatibility with Python
2.4 so the new constructs should not be used.

In order to avoid this extra indentation and code changes everywhere in
the Logical Units code, we decided to allow LUs to declare locks, and
then execute their code with their locks acquired. In the new world LUs
are called like this::

  # user passed names are expanded to the internal lock/resource name,
  # then known needed locks are declared
  lu.ExpandNames()
  ... some locking/adding of locks may happen ...
  # late declaration of locks for one level: this is useful because sometimes
  # we can't know which resource we need before locking the previous level
  lu.DeclareLocks() # for each level (cluster, instance, node)
  ... more locking/adding of locks can happen ...
  # these functions are called with the proper locks held
  lu.CheckPrereq()
  lu.Exec()
  ... locks declared for removal are removed, all acquired locks released ...

The Processor and the LogicalUnit class will contain exact documentation
on how locks are supposed to be declared.

Caveats
+++++++

This library will provide an easy upgrade path to bring all the code to
granular locking without breaking everything, and it will also guarantee
against a lot of common errors. Code switching from the old "lock
everything" lock to the new system, though, needs to be carefully
scrutinised to be sure it is really acquiring all the necessary locks,
and none has been overlooked or forgotten.

The code can contain other locks outside of this library, to synchronise
other threaded code (eg for the job queue) but in general these should
be leaf locks or carefully structured non-leaf ones, to avoid deadlock
race conditions.


Job Queue
~~~~~~~~~

Granular locking is not enough to speed up operations; we also need a
queue to store these and to be able to process as many as possible in
parallel.

A Ganeti job will consist of multiple ``OpCodes`` which are the basic
element of operation in Ganeti 1.2 (and will remain as such). Most
command-level commands are equivalent to one OpCode, or in some cases
to a sequence of opcodes, all of the same type (e.g. evacuating a node
will generate N opcodes of type replace disks).


Job execution ("Life of a Ganeti job")
++++++++++++++++++++++++++++++++++++++++

#. Job gets submitted by the client. A new job identifier is generated
   and assigned to the job. The job is then automatically replicated
   [#replic]_ to all nodes in the cluster. The identifier is returned to
   the client.
#. A pool of worker threads waits for new jobs. If all are busy, the job
   has to wait and the first worker finishing its work will grab it.
   Otherwise any of the waiting threads will pick up the new job.
#. Client waits for job status updates by calling a waiting RPC
   function. Log messages may be shown to the user. Until the job is
   started, it can also be canceled.
#. As soon as the job is finished, its final result and status can be
   retrieved from the server.
#. If the client archives the job, it gets moved to a history directory.
   There will be a method to archive all jobs older than a given age.

.. [#replic] We need replication in order to maintain the consistency
   across all nodes in the system; the master node only differs in the
   fact that now it is running the master daemon, but if it fails and we
   do a master failover, the jobs are still visible on the new master
   (though marked as failed).

Failures to replicate a job to other nodes will be only flagged as
errors in the master daemon log if more than half of the nodes failed,
otherwise we ignore the failure, and rely on the fact that the next
update (for still running jobs) will retry the update. For finished
jobs, it is less of a problem.

Future improvements will look into checking the consistency of the job
list and jobs themselves at master daemon startup.


Job storage
+++++++++++

Jobs are stored in the filesystem as individual files, serialized
using JSON (standard serialization mechanism in Ganeti).

The choice of storing each job in its own file was made because:

- a file can be atomically replaced
- a file can easily be replicated to other nodes
- checking consistency across nodes can be implemented very easily,
  since all job files should be (at a given moment in time) identical

The other possible choices that were discussed and discounted were:

- single big file with all job data: not feasible due to difficult
  updates
- in-process databases: hard to replicate the entire database to the
  other nodes, and replicating individual operations does not mean we
  keep consistency


Queue structure
+++++++++++++++

All file operations have to be done atomically by writing to a temporary
file and subsequent renaming. Except for log messages, every change in a
job is stored and replicated to other nodes.

::

  /var/lib/ganeti/queue/
    job-1 (JSON encoded job description and status)
    []
    job-37
    job-38
    job-39
    lock (Queue managing process opens this file in exclusive mode)
    serial (Last job ID used)
    version (Queue format version)
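
The atomic update of a file in this directory, as described above, can
be sketched as follows (the helper name is illustrative)::

  import os

  def atomic_write(file_name, data):
    """Write data to a temporary file, then rename it into place."""
    tmp_name = file_name + ".new"
    fd = os.open(tmp_name, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0600)
    try:
      os.write(fd, data)
      os.fsync(fd)
    finally:
      os.close(fd)
    os.rename(tmp_name, file_name)  # atomic replacement on POSIX

  # e.g. atomic_write("/var/lib/ganeti/queue/job-37", serialized_job),
  # with serialized_job being the JSON-encoded job contents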


Locking
+++++++

Locking in the job queue is a complicated topic. It is called from more
than one thread and must be thread-safe. For simplicity, a single lock
is used for the whole job queue.

A more detailed description can be found in doc/locking.rst.


Internal RPC
++++++++++++

RPC calls available between Ganeti master and node daemons:

jobqueue_update(file_name, content)
  Writes a file in the job queue directory.
jobqueue_purge()
  Cleans the job queue directory completely, including archived jobs.
jobqueue_rename(old, new)
  Renames a file in the job queue directory.


Client RPC
++++++++++

728 729
RPC between Ganeti clients and the Ganeti master daemon supports the
following operations:
730 731

SubmitJob(ops)
732 733 734
  Submits a list of opcodes and returns the job identifier. The
  identifier is guaranteed to be unique during the lifetime of a
  cluster.
735
WaitForJobChange(job_id, fields, [], timeout)
736 737 738
  This function waits until a job changes or a timeout expires. The
  condition for when a job changed is defined by the fields passed and
  the last log message received.
739 740 741
QueryJobs(job_ids, fields)
  Returns field values for the job identifiers passed.
CancelJob(job_id)
742 743
  Cancels the job specified by identifier. This operation may fail if
  the job is already running, canceled or finished.
744
ArchiveJob(job_id)
745 746
  Moves a job into the /archive/ directory. This operation will fail if
  the job has not been canceled or finished.
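
For illustration only, a client using a hypothetical wrapper library
around these calls might do::

  client = LuxiClient("/var/run/ganeti/master.sock")  # assumed wrapper

  job_id = client.SubmitJob([op_start_instance])   # a list of opcodes
  client.WaitForJobChange(job_id, ["status"], [], 60)
  status = client.QueryJobs([job_id], ["status"])[0][0]
  if status in ("Success", "Error", "Canceled"):
    client.ArchiveJob(job_id)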


Job and opcode status
+++++++++++++++++++++

Each job and each opcode has, at any time, one of the following states:

Queued
  The job/opcode was submitted, but did not yet start.
Waiting
  The job/opcode is waiting for a lock to proceed.
Running
  The job/opcode is running.
Canceled
  The job/opcode was canceled before it started.
Success
  The job/opcode ran and finished successfully.
Error
  The job/opcode was aborted with an error.

If the master is aborted while a job is running, the job will be set to
the Error status once the master starts again.


History
+++++++

Archived jobs are kept in a separate directory,
``/var/lib/ganeti/queue/archive/``.  This is done in order to speed up
the queue handling: by default, the jobs in the archive are not
touched by any functions. Only the current (unarchived) jobs are
parsed, loaded, and verified (if implemented) by the master daemon.


Ganeti updates
++++++++++++++

The queue has to be completely empty for Ganeti updates with changes
in the job queue structure. In order to allow this, there will be a
way to prevent new jobs entering the queue.


Object parameters
~~~~~~~~~~~~~~~~~

Across all cluster configuration data, we have multiple classes of
parameters:

A. cluster-wide parameters (e.g. name of the cluster, the master);
   these are the ones that we have today, and are unchanged from the
   current model

#. node parameters

#. instance specific parameters, e.g. the name of disks (LV), that
   cannot be shared with other instances

#. instance parameters, that are or can be the same for many
   instances, but are not hypervisor related; e.g. the number of VCPUs,
   or the size of memory

#. instance parameters that are hypervisor specific (e.g. kernel_path
   or PAE mode)


The following definitions for instance parameters will be used below:

:hypervisor parameter:
  a hypervisor parameter (or hypervisor specific parameter) is defined
  as a parameter that is interpreted by the hypervisor support code in
  Ganeti and usually is specific to a particular hypervisor (like the
  kernel path for :term:`PVM` which makes no sense for :term:`HVM`).

:backend parameter:
  a backend parameter is defined as an instance parameter that can be
  shared among a list of instances, and is either generic enough not
  to be tied to a given hypervisor or cannot influence at all the
  hypervisor behaviour.

  For example: memory, vcpus, auto_balance

  All these parameters will be encoded into constants.py with the prefix
  "BE\_" and the whole list of parameters will exist in the set
  "BES_PARAMETERS"

:proper parameter:
  a parameter whose value is unique to the instance (e.g. the name of a
  LV, or the MAC of a NIC)

As a general rule, for all kinds of parameters, None (or in
JSON-speak, null) will no longer be a valid value for a parameter. As
such, only non-default parameters will be saved as part of objects in
the serialization step, reducing the size of the serialized format.

Cluster parameters
++++++++++++++++++

Cluster parameters remain as today, attributes at the top level of the
Cluster object. In addition, two new attributes at this level will
hold defaults for the instances:

- hvparams, a dictionary indexed by hypervisor type, holding default
  values for hypervisor parameters that are not defined/overridden by
  the instances of this hypervisor type

- beparams, a dictionary holding (for 2.0) a single element 'default',
  which holds the default value for backend parameters
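
For illustration, the serialized form of these two attributes could
look like the following sketch (the hypervisor names and parameter
values are examples only)::

  cluster.hvparams = {
    "xen-pvm": {"kernel_path": "/boot/vmlinuz-2.6-xenU",
                "initrd_path": ""},
    "xen-hvm": {"boot_order": "cd", "acpi": True},
  }
  cluster.beparams = {
    "default": {"memory": 512, "vcpus": 1, "auto_balance": True},
  }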

Node parameters
+++++++++++++++

Node-related parameters are very few, and we will continue using the
same model for these as previously (attributes on the Node object).

There are three new node flags, described in a separate section "node
flags" below.

Instance parameters
+++++++++++++++++++

As described before, the instance parameters are split in three:
instance proper parameters, unique to each instance, instance
hypervisor parameters and instance backend parameters.

The hvparams and beparams are kept in two dictionaries at instance
level. Only non-default parameters are stored (but once customized, a
parameter will be kept, even with the same value as the default one,
until reset).

The names for hypervisor parameters in the instance.hvparams subtree
should be chosen to be as generic as possible, especially if specific
parameters could conceivably be useful for more than one hypervisor,
e.g. ``instance.hvparams.vnc_console_port`` instead of using both
``instance.hvparams.hvm_vnc_console_port`` and
``instance.hvparams.kvm_vnc_console_port``.

There are some special cases related to disks and NICs (for example):
a disk has both Ganeti-related parameters (e.g. the name of the LV)
and hypervisor-related parameters (how the disk is presented to/named
in the instance). The former parameters remain as proper-instance
parameters, while the latter values are migrated to the hvparams
structure. In 2.0, we will have only globally-per-instance such
hypervisor parameters, and not per-disk ones (e.g. all NICs will be
exported as of the same type).

Starting from the 1.2 list of instance parameters, here is how they
will be mapped to the three classes of parameters:

- name (P)
- primary_node (P)
- os (P)
- hypervisor (P)
- status (P)
- memory (BE)
- vcpus (BE)
- nics (P)
- disks (P)
- disk_template (P)
- network_port (P)
- kernel_path (HV)
- initrd_path (HV)
- hvm_boot_order (HV)
- hvm_acpi (HV)
- hvm_pae (HV)
- hvm_cdrom_image_path (HV)
- hvm_nic_type (HV)
- hvm_disk_type (HV)
- vnc_bind_address (HV)
- serial_no (P)


Parameter validation
++++++++++++++++++++

To support the new cluster parameter design, additional features will
be required from the hypervisor support implementations in Ganeti.

The hypervisor support  implementation API will be extended with the
following features:

:PARAMETERS: class-level attribute holding the list of valid parameters
  for this hypervisor
:CheckParamSyntax(hvparams): checks that the given parameters are
  valid (as in the names are valid) for this hypervisor; usually just
  comparing ``hvparams.keys()`` and ``cls.PARAMETERS``; this is a class
  method that can be called from within master code (i.e. cmdlib) and
  should be safe to do so
:ValidateParameters(hvparams): verifies the values of the provided
  parameters against this hypervisor; this is a method that will be
  called on the target node, from backend.py code, and as such can
  make node-specific checks (e.g. kernel_path checking)
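
As an illustrative sketch (the class name, parameter names and the
exception used are not the actual Ganeti ones), a hypervisor
implementation could provide::

  import os

  class ExampleHypervisor(object):
    # class-level list of valid parameter names for this hypervisor
    PARAMETERS = ["kernel_path", "initrd_path", "root_path"]

    @classmethod
    def CheckParamSyntax(cls, hvparams):
      """Name-level validation; safe to call from master code (cmdlib)."""
      invalid = set(hvparams.keys()) - set(cls.PARAMETERS)
      if invalid:
        # the real code would use a Ganeti-specific exception class
        raise ValueError("Unknown parameters: %s" % ", ".join(invalid))

    def ValidateParameters(self, hvparams):
      """Value-level validation; runs on the target node (backend.py)."""
      kernel = hvparams.get("kernel_path")
      if kernel and not os.path.isfile(kernel):
        raise ValueError("Kernel %s not found on this node" % kernel)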

Default value application
+++++++++++++++++++++++++

The application of defaults to an instance is done in the Cluster
object, via two new methods as follows:

- ``Cluster.FillHV(instance)``, returns 'filled' hvparams dict, based on
  instance's hvparams and cluster's ``hvparams[instance.hypervisor]``

- ``Cluster.FillBE(instance, be_type="default")``, which returns the
  beparams dict, based on the instance and cluster beparams

The FillHV/BE transformations will be used, for example, in the
RpcRunner when sending an instance for activation/stop, and the sent
instance hvparams/beparams will have the final value (noded code doesn't
know about defaults).
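
A simplified sketch of these two methods, following the merge
semantics described above (instance values override the cluster
defaults)::

  class Cluster(object):
    # hvparams: dict of dicts, indexed by hypervisor type
    # beparams: dict of dicts, for 2.0 with the single key "default"

    def FillHV(self, instance):
      """Return instance.hvparams with the cluster defaults filled in."""
      result = self.hvparams.get(instance.hypervisor, {}).copy()
      result.update(instance.hvparams)   # instance values take precedence
      return result

    def FillBE(self, instance, be_type="default"):
      """Return instance.beparams with the cluster defaults filled in."""
      result = self.beparams.get(be_type, {}).copy()
      result.update(instance.beparams)
      return result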

LU code will need to self-call the transformation, if needed.

Opcode changes
++++++++++++++

The parameter changes will have impact on the OpCodes, especially on
the following ones:

- ``OpCreateInstance``, where the new hv and be parameters will be sent
  as dictionaries; note that all hv and be parameters are now optional,
  as the values can be instead taken from the cluster
- ``OpQueryInstances``, where we have to be able to query these new
  parameters; the syntax for names will be ``hvparam/$NAME`` and
  ``beparam/$NAME`` for querying an individual parameter out of one
  dictionary, and ``hvparams``, respectively ``beparams``, for the whole
  dictionaries
- ``OpModifyInstance``, where the modified parameters are sent as
  dictionaries

Additionally, we will need new OpCodes to modify the cluster-level
defaults for the be/hv sets of parameters.

Caveats
+++++++

One problem that might appear is that our classification is not
complete or not good enough, and we'll need to change this model. As
a last resort, we will need to roll back and keep the 1.2 style.

Another problem is that classification of one parameter is unclear
(e.g. ``network_port``, is this BE or HV?); in this case we'll take
the risk of having to move parameters later between classes.

Security
++++++++

The only security issue that we foresee is if some new parameters will
have sensitive values. If so, we will need to have a way to export the
config data while purging the sensitive values.

E.g. for the DRBD shared secrets, we could export these with the
values replaced by an empty string.

Node flags
~~~~~~~~~~

Ganeti 2.0 adds three node flags that change the way nodes are handled
within Ganeti and the related infrastructure (iallocator interaction,
RAPI data export).

*master candidate* flag
+++++++++++++++++++++++

Ganeti 2.0 allows more scalability in operation by introducing
parallelization. However, a new bottleneck appears: the
synchronization and replication of the cluster configuration to all
nodes in the cluster.

This breaks scalability, as the speed of the replication decreases
roughly with the number of nodes in the cluster. The goal of the
master candidate flag is to change this O(n) into O(1) with respect to
job and configuration data propagation.

Only nodes having this flag set (let's call this set of nodes the
*candidate pool*) will have jobs and configuration data replicated.

The cluster will have a new parameter (runtime changeable) called
``candidate_pool_size`` which represents the number of candidates the
cluster tries to maintain (preferably automatically).

This will impact the cluster operations as follows:

- jobs and config data will be replicated only to a fixed set of nodes
- master fail-over will only be possible to a node in the candidate pool
- cluster verify needs changing to account for these two roles
- external scripts will no longer have access to the configuration
  file (this is not recommended anyway)


The caveats of this change are:

- if all candidates are lost (completely), cluster configuration is
  lost (but it should be backed up external to the cluster anyway)

- failed nodes which are candidate must be dealt with properly, so
  that we don't lose too many candidates at the same time; this will be
  reported in cluster verify

- the 'all equal' concept of Ganeti is no longer true

- the partial distribution of config data means that all nodes will
  have to revert to ssconf files for master info (as in 1.2)

Advantages:

- speed on a 100+ nodes simulated cluster is greatly enhanced, even
  for a simple operation; ``gnt-instance remove`` on a diskless instance
  goes from ~9 seconds to ~2 seconds

- node failure of non-candidates will be less impacting on the cluster

The default value for the candidate pool size will be set to 10 but
this can be changed at cluster creation and modified any time later.

Testing on simulated big clusters with sequential and parallel jobs
show that this value (10) is a sweet-spot from performance and load
point of view.

*offline* flag
++++++++++++++

In order to support better the situation in which nodes are offline
(e.g. for repair) without altering the cluster configuration, Ganeti
needs to be told and needs to properly handle this state for nodes.

This will result in simpler procedures, and less mistakes, when the
amount of node failures is high on an absolute scale (either due to
high failure rate or simply big clusters).

Nodes having this attribute set will not be contacted for inter-node
RPC calls, will not be master candidates, and will not be able to host
instances as primaries.

Setting this attribute on a node:

- will not be allowed if the node is the master
- will not be allowed if the node has primary instances
- will cause the node to be demoted from the master candidate role (if
  it was), possibly causing another node to be promoted to that role

This attribute will impact the cluster operations as follows:

- querying these nodes for anything will fail instantly in the RPC
  library, with a specific RPC error (RpcResult.offline == True)

- they will be listed in the Other section of cluster verify

The code is changed in the following ways:

- RPC calls were converted to skip such nodes:

  - RpcRunner-instance-based RPC calls are easy to convert

  - static/classmethod RPC calls are harder to convert, and were left
    alone

- the RPC results were unified so that this new result state (offline)
  can be differentiated

- master voting still queries nodes in repair, as we need to ensure
  consistency in case the (wrong) masters have old data, and nodes have
  come back from repairs

Caveats:

- some operation semantics are less clear (e.g. what to do on instance
  start with offline secondary?); for now, these will just fail as if
  the flag is not set (but faster)
- 2-node cluster with one node offline needs manual startup of the
  master with a special flag to skip voting (as the master can't get a
  quorum there)

One of the advantages of implementing this flag is that it will allow
in the future automation tools to automatically put the node in
repairs and recover from this state, and the code (should/will) handle
this much better than just timing out. So, future possible
improvements (for later versions):

- watcher will detect nodes which fail RPC calls, will attempt to ssh
  to them, and if that fails will put them offline
- watcher will try to ssh and query the offline nodes, and if
  successful will take them off the repair list

Alternatives considered: The RPC call model in 2.0 is, by default,
much nicer - errors are logged in the background, and job/opcode
execution is clearer, so we could simply not introduce this. However,
having this state will make both the codepaths clearer (offline
vs. temporary failure) and the operational model (it's not a node with
errors, but an offline node).


*drained* flag
++++++++++++++

Due to parallel execution of jobs in Ganeti 2.0, we could have the
following situation:

- gnt-node migrate + failover is run
- gnt-node evacuate is run, which schedules a long-running 6-opcode
  job for the node
- partway through, a new job comes in that runs an iallocator script,
  which finds the above node as empty and a very good candidate
- gnt-node evacuate has finished, but now it has to be run again, to
  clean the above instance(s)

In order to prevent this situation, and to be able to get nodes into
proper offline status easily, a new *drained* flag was added to the
nodes.

This flag (which actually means "is being, or was, drained and is
expected to go offline") will prevent allocations on the node, but
otherwise all other operations (start/stop instance, query, etc.) will
work without any restrictions.

Interaction between flags
+++++++++++++++++++++++++

While these flags are implemented as separate flags, they are
mutually-exclusive and are acting together with the master node role
as a single *node status* value. In other words, a node is only in one
of these roles at a given time. The lack of any of these flags denotes
a regular node.

The current node status is visible in the ``gnt-cluster verify``
output, and the individual flags can be examined via separate flags in
the ``gnt-node list`` output.

These new flags will be exported in both the iallocator input message
and via RAPI, see the respective man pages for the exact names.

Feature changes
---------------

The main feature-level changes will be:

- a number of disk related changes
- removal of fixed two-disk, one-nic per instance limitation

Disk handling changes
~~~~~~~~~~~~~~~~~~~~~

The storage options available in Ganeti 1.x were introduced based on
then-current software (first DRBD 0.7 then later DRBD 8) and the
estimated usage patterns. However, experience has later shown that some
assumptions made initially are not true and that more flexibility is
needed.

One main assumption made was that disk failures should be treated as
'rare' events, and that each of them needs to be manually handled in
order to ensure data safety; however, both these assumptions are false:

- disk failures can be a common occurrence, based on usage patterns or
  cluster size
- our disk setup is robust enough (referring to DRBD8 + LVM) that we
  could automate more of the recovery

Note that we still don't have fully-automated disk recovery as a goal,
but our goal is to reduce the manual work needed.

As such, we plan the following main changes:

1207 1208 1209
- DRBD8 is much more flexible and stable than its previous version
  (0.7), such that removing the support for the ``remote_raid1``
  template and focusing only on DRBD8 is easier

- dynamic discovery of DRBD devices is not actually needed in a cluster
  where the DRBD namespace is controlled by Ganeti; switching to a
  static assignment (done at either instance creation time or change
  secondary time) will change the disk activation time from O(n) to
  O(1), which on big clusters is a significant gain

- remove the hard dependency on LVM (currently all available storage
  types are ultimately backed by LVM volumes) by introducing file-based
  storage

Additionally, a number of smaller enhancements are also planned:

- support variable number of disks
- support read-only disks

Future enhancements in the 2.x series, which do not require base design
changes, might include:

- enhancement of the LVM allocation method in order to try to keep
  all of an instance's virtual disks on the same physical
  disks

- add support for DRBD8 authentication at handshake time in
  order to ensure each device connects to the correct peer

- remove the restrictions on failover only to the secondary
  which creates very strict rules on cluster allocation

DRBD minor allocation
+++++++++++++++++++++

Currently, when trying to identify or activate a new DRBD (or MD)
device, the code scans all in-use devices in order to see if we find
one that looks similar to our parameters and is already in the desired
state or not. Since this needs external commands to be run, it is very
slow when more than a few devices are already present.

Therefore, we will change the discovery model from dynamic to
static. When a new device is logically created (added to the
configuration) a free minor number is computed from the list of
devices that should exist on that node and assigned to that
device.

At device activation, if the minor is already in use, we check if
it has our parameters; if not so, we just destroy the device (if
possible, otherwise we abort) and start it with our own
parameters.

This means that we in effect take ownership of the minor space for
that device type; if there's a user-created DRBD minor, it will be
automatically removed.

The change will have the effect of reducing the number of external
commands run per device from a constant number times the index of the
first free DRBD minor to just a constant number.
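
The minor computation itself then becomes a purely configuration-level
operation, along the lines of this sketch::

  def find_free_minor(configured_minors):
    """Return the first DRBD minor not used by configured devices.

    The input is derived only from the cluster configuration; no
    device scanning on the node is involved.
    """
    used = set(configured_minors)
    minor = 0
    while minor in used:
      minor += 1
    return minor

  # e.g. with minors 0, 1 and 3 already assigned on a node:
  assert find_free_minor([0, 1, 3]) == 2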

Removal of obsolete device types (MD, DRBD7)
++++++++++++++++++++++++++++++++++++++++++++

We need to remove these device types because of two issues. First,
DRBD7 has bad failure modes in case of dual failures (both network and
disk): it cannot propagate the error up the device stack and instead
just panics. Second, due to the asymmetry between primary and
secondary in MD+DRBD mode, we cannot do live failover (not even if we
had MD+DRBD8).

File-based storage support
++++++++++++++++++++++++++

Using files instead of logical volumes for instance storage would
allow us to get rid of the hard requirement for volume groups for
testing clusters and it would also allow usage of SAN storage to do
live failover taking advantage of this storage solution.

Better LVM allocation
+++++++++++++++++++++

Currently, the LV to PV allocation mechanism is a very simple one: at
each new request for a logical volume, tell LVM to allocate the volume
in order based on the amount of free space. This is good for
simplicity and for keeping the usage equally spread over the available
physical disks; however, it introduces the problem that an instance
could end up with its (currently) two drives on two physical disks, or
(worse) that the data and metadata for a DRBD device end up on
different drives.

This is bad because it causes unneeded ``replace-disks`` operations in
case of a physical failure.

The solution is to batch allocations for an instance and make the LVM
handling code try to allocate as close as possible all the storage of
one instance. We will still allow the logical volumes to spill over to
additional disks as needed.

Note that this clustered allocation can only be attempted at initial
instance creation, or at change secondary node time. When adding a disk
or replacing individual disks, it is not easy to compute the current
disk map, so we will not attempt the clustering.
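
As an illustration only, such a batched allocation biased towards a
common set of physical volumes could be sketched as follows; the
helper and its inputs are assumptions, and real code would go through
the Ganeti LVM layer rather than calling ``lvcreate`` directly::

  import subprocess

  def create_instance_lvs(vg_name, lvs, preferred_pvs):
      # lvs is a list of (lv_name, size_in_mebibytes) tuples holding all
      # the volumes of one instance; preferred_pvs lists the physical
      # volumes we would like to keep the instance on.
      for lv_name, size in lvs:
          cmd = ["lvcreate", "-L", "%dm" % size, "-n", lv_name, vg_name]
          # First try only the preferred PVs; if that fails (e.g. not
          # enough free space on them), retry unrestricted so that the
          # volume can spill over to other disks.
          if subprocess.call(cmd + preferred_pvs) != 0:
              subprocess.check_call(cmd)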

DRBD8 peer authentication at handshake
++++++++++++++++++++++++++++++++++++++

DRBD8 has a new feature that allows authentication of the peer at
connect time. We can use this to prevent connecting to the wrong peer,
more than to secure the connection. Even though we never had issues
with wrong connections, it would be good to implement this.


LVM self-repair (optional)
++++++++++++++++++++++++++

The complete failure of a physical disk is very tedious to
troubleshoot, mainly because of the many failure modes and the many
steps needed. We can safely automate some of the steps, more
specifically the ``vgreduce --removemissing`` step, using the
following method (a sketch in code follows the list):

#. check if all nodes have consistent volume groups
#. if yes, and previous status was yes, do nothing
#. if yes, and previous status was no, save status and restart
#. if no, and previous status was no, do nothing
#. if no, and previous status was yes:

    #. if more than one node is inconsistent, do nothing
    #. if only one node is inconsistent:

        #. run ``vgreduce --removemissing``
        #. log this occurrence in the Ganeti log in a form that
           can be used for monitoring
        #. [FUTURE] run ``replace-disks`` for all
           instances affected
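
Assuming a periodic watcher that remembers the consistency status of
the previous run, one iteration of this check could be sketched as
below; the volume group name is just an example and, unlike in this
simplification, the command would have to be run on the affected node
via the node daemon::

  import logging
  import subprocess

  def lvm_self_repair(all_consistent, prev_consistent, bad_nodes,
                      vg_name="xenvg"):
      # Returns the status to remember for the next run.
      if all_consistent == prev_consistent:
          # steps 2 and 4: no transition, do nothing
          return prev_consistent
      if all_consistent:
          # step 3: consistent again, just save the new status
          return True
      if len(bad_nodes) != 1:
          # step 5: more than one node inconsistent, no automated action
          return False
      # step 5: exactly one inconsistent node, run the safe repair step
      subprocess.check_call(["vgreduce", "--removemissing", vg_name])
      logging.warning("vgreduce --removemissing run for node %s",
                      bad_nodes[0])
      # [FUTURE] trigger replace-disks for the affected instances
      return False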

Failover to any node
++++++++++++++++++++

With a modified disk activation sequence, we can implement the
*failover to any* functionality, removing many of the layout
restrictions of a cluster:

- the need to reserve memory on the current secondary: this is reduced
  to the need to reserve memory somewhere on the cluster

- the need to first failover and then replace the secondary for an
  instance: with failover-to-any, we can fail over directly to
  another node, which also replaces the disks in the same step

In the following, we denote the current primary by P1, the current
secondary by S1, and the new primary and secondary by P2 and S2. P2
is fixed to the node the user chooses, but the choice of S2 can be
made between P1 and S1. This choice can be constrained, depending on
which of P1 and S1 has failed.

- if P1 has failed, then S1 must become S2, and live migration is not
  possible
- if S1 has failed, then P1 must become S2, and live migration could be
  possible (in theory, but this is not a design goal for 2.0)

The algorithm for performing the failover is straightforward:

- verify that S2 (the node the user has chosen to keep as secondary) has
  valid data (is consistent)

- tear down the current DRBD association and set up a DRBD pairing
  between P2 (the node indicated by the user) and S2; since P2 has no
  data, it will start re-syncing from S2

- as soon as P2 is in state SyncTarget (i.e. after the resync has
  started but before it has finished), we can promote it to primary role
  (r/w) and start the instance on P2 (a sketch of detecting this state
  follows the list)

- as soon as the P2↔S2 sync has finished, we can remove the old data
  on the old node that has not been chosen for S2
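
As an illustration of the SyncTarget check above, the connection state
of a DRBD minor can be read from ``/proc/drbd``; this is only a sketch
and the real code would also have to cope with the small format
differences between DRBD versions::

  import re

  def drbd_connection_state(minor, proc_path="/proc/drbd"):
      # Return the connection state (e.g. 'SyncTarget') of the given
      # minor, or None if the minor is not listed in /proc/drbd.
      with open(proc_path) as proc:
          for line in proc:
              match = re.match(r"\s*(\d+):\s+cs:(\S+)", line)
              if match and int(match.group(1)) == minor:
                  return match.group(2)
      return None

The failover code could poll this until the new primary reports
``SyncTarget`` (or a later state) before promoting it and starting the
instance.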

Caveats: during the P2↔S2 sync, a (non-transient) network error
will cause I/O errors on the instance, so (if a longer instance
downtime is acceptable) we can postpone the restart of the instance
until the resync is done. However, disk I/O errors on S2 will cause
data loss, since we don't have a good copy of the data anymore, so in
this case waiting for the sync to complete is not an option. As such,
it is recommended that this feature be used only in conjunction with
proper disk monitoring.


Live migration note: While failover-to-any is possible for all choices
of S2, migration-to-any is possible only if we keep P1 as S2.

Caveats
+++++++

The dynamic device model, while more complex, has an advantage: it
will not mistakenly reuse the DRBD device of another instance, since
it always looks for either our own or a free one.

The static one, in contrast, will assume that given a minor number N,
it's ours and we can take over. This needs careful implementation such
that if the minor is in use, either we are able to cleanly shut it
down, or we abort the startup. Otherwise, it could be that we start
syncing between two instances' disks, causing data loss.


Variable number of disk/NICs per instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Variable number of disks
++++++++++++++++++++++++

In order to support high-security scenarios (for example read-only sda
and read-write sdb), we need to make a fully flexible disk
definition. This has less impact than it might seem at first sight:
only the instance creation has a hard-coded number of disks, not the
disk handling code. The block device handling and most of the instance
handling code is already working with "the instance's disks" as
opposed to "the two disks of the instance", but some pieces are not
(e.g. import/export) and the code needs a review to ensure safety.

The objective is to be able to specify the number of disks at
instance creation, and to be able to toggle a disk from read-only to
read-write afterward.

Variable number of NICs
+++++++++++++++++++++++

Similar to the disk change, we need to allow multiple network
interfaces per instance. This will affect the internal code (some
functions will have to stop assuming that ``instance.nics`` is a list
of length one), the OS API, which currently can export/import only one
instance, and the command line interface.
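
As a small sketch of what dropping the single-NIC assumption means for
such code, the snippet below builds per-NIC information for an
arbitrary number of interfaces; the variable names are purely
illustrative and not a definition of the new OS API::

  from collections import namedtuple

  NIC = namedtuple("NIC", ["mac", "bridge"])

  def nic_environment(nics):
      # Build one entry per NIC instead of assuming exactly one interface.
      env = {"NIC_COUNT": str(len(nics))}
      for idx, nic in enumerate(nics):
          env["NIC_%d_MAC" % idx] = nic.mac
          env["NIC_%d_BRIDGE" % idx] = nic.bridge
      return env

  # Example: an instance with two NICs
  print(nic_environment([NIC("aa:00:00:11:22:33", "xen-br0"),
                         NIC("aa:00:00:44:55:66", "xen-br1")]))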

Interface changes
-----------------

There are two areas of interface changes: API-level changes (the OS
interface and the RAPI interface) and the command line interface
changes.

OS interface
~~~~~~~~~~~~

The current Ganeti OS interface, version 5, is tailored for Ganeti 1.2.
The interface is composed of a series of scripts which get called with
certain parameters to perform OS-dependent operations on the cluster.
The current scripts are:

create
  called when a new instance is added to the cluster
export
  called to export an instance disk to a stream
import
  called to import from a stream to a new instance
rename
  called to perform the OS-specific operations necessary for renaming an
  instance

Currently these scripts suffer from the limitations of Ganeti 1.2: for
example, they accept exactly one block and one swap device to operate
on rather than any number of generic block devices; they blindly assume
that an instance will have just one network interface; and they cannot
be configured to optimise the instance for a particular hypervisor.

Since in Ganeti 2.0 we want to support multiple hypervisors and a
non-fixed number of NICs and disks, the OS interface needs to change to
transmit the appropriate amount of information about an instance to its
managing operating system when operating on it. Moreover, since some
old assumptions usually made in OS scripts are no longer valid, we need
to re-establish a common understanding of what can and cannot be
assumed about the Ganeti environment.


When designing the new OS API, our priorities are:

- ease of use
- future extensibility
- ease of porting from the old API
- modularity

As such we want to limit the number of scripts that must be written to
support an OS, and make it easy to share code between them by making
their input uniform. We will also leave the current script structure
unchanged, as far as we can, and make a few of the scripts (import,
export and rename) optional. Most information will be passed to the
scripts through environment variables, for ease of access and at the
same time ease of using only the information a script needs.


The Scripts
+++++++++++
