Ganeti 2.0 design

This document describes the major changes in Ganeti 2.0 compared to the 1.2 version.

The 2.0 version will constitute a rewrite of the 'core' architecture, paving the way for additional features in future 2.x versions.

Objective

Ganeti 1.2 has many scalability issues and restrictions due to its roots as software for managing small and 'static' clusters.

Version 2.0 will attempt to remedy first the scalability issues and then the restrictions.

Background

While Ganeti 1.2 is usable, it severely limits the flexibility of the cluster administration and imposes a very rigid model. It has the following main scalability issues:

  • only one operation at a time on the cluster [1]
  • poor handling of node failures in the cluster
  • mixing hypervisors in a cluster not allowed

It also has a number of artificial restrictions, due to historical design:

  • fixed number of disks (two) per instance
  • fixed number of NICs
[1] Replace disks will release the lock, but this is an exception and not a recommended way to operate

The 2.0 version is intended to address some of these problems, and create a more flexible code base for future developments.

Among these problems, the single-operation-at-a-time restriction is the biggest issue with the current version of Ganeti. It is such a big impediment in operating bigger clusters that one is often tempted to remove the lock just to do a simple operation, such as starting an instance, while an OS installation is running.

Scalability problems

Ganeti 1.2 has a single global lock, which is used for all cluster operations. This has been painful at various times, for example:

  • It is impossible for two people to efficiently interact with a cluster (for example for debugging) at the same time.
  • When batch jobs are running it's impossible to do other work (for example failovers/fixes) on a cluster.

This poses scalability problems: as clusters grow in node and instance count, it becomes much more likely that operations which could conceivably run in parallel (for example because they happen on different nodes) end up stalling each other while waiting for the global lock, without any real reason for that to happen.

One of the main causes of this global lock (beside the higher difficulty of ensuring data consistency in a more granular lock model) is the fact that currently there is no long-lived process in Ganeti that can coordinate multiple operations. Each command tries to acquire the so-called cmd lock and, when it succeeds, it takes complete ownership of the cluster configuration and state.

Other scalability problems are due to the design of the DRBD device model, which assumed at its creation a low number (one to four) of instances per node, an assumption that is no longer true with today's hardware.

Artificial restrictions

Ganeti 1.2 (and previous versions) have a fixed model of two disks and one NIC per instance. This is a purely artificial restriction, but it touches so many areas (configuration, import/export, command line) that removing it is better suited to a major release than a minor one.

Architecture issues

The fact that each command is a separate process that reads the cluster state, executes the command, and saves the new state is also an issue on big clusters where the configuration data for the cluster begins to be non-trivial in size.

Overview

In order to solve the scalability problems, a rewrite of the core design of Ganeti is required. While the cluster operations themselves won't change (e.g. start instance will do the same things), the way these operations are scheduled internally will change radically.

The new design will change the cluster architecture to:

arch-2.0.png

This differs from the 1.2 architecture by the addition of the master daemon, which will be the only entity to talk to the node daemons.

Detailed design

The changes for 2.0 can be split into roughly three areas:

  • core changes that affect the design of the software
  • features (or restriction removals) that do not have a wide impact on the design
  • user-level and API-level changes which translate into differences for the operation of the cluster

Core changes

The main changes will be switching from a per-process model to a daemon-based model, where the individual gnt-* commands will be clients that talk to this daemon (see Master daemon). This will allow us to get rid of the global cluster lock for most operations, having instead a per-object lock (see Granular locking). Also, the daemon will be able to queue jobs, and this will allow the individual clients to submit jobs without waiting for them to finish, and also see the result of old requests (see Job Queue).

Besides these major changes, another 'core' change, though less visible to users, will be changing the model of object attribute storage and separating it into namespaces (so that a Xen PVM instance will not carry the Xen HVM parameters). This will allow future flexibility in defining additional parameters. For more details see Object parameters.
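
As a purely illustrative sketch (the attribute and parameter names below are examples, not the final 2.0 schema), a namespaced instance definition could look like this:

    instance = {
        "name": "instance1.example.com",
        "hypervisor": "xen-pvm",
        # hypervisor-specific namespace: only PVM parameters are present,
        # the HVM ones simply do not exist for this instance
        "hvparams": {
            "kernel_path": "/boot/vmlinuz-2.6-xenU",
            "initrd_path": "/boot/initrd-2.6-xenU",
        },
        # backend (resource) namespace
        "beparams": {
            "memory": 512,
            "vcpus": 1,
        },
    }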

The various changes brought in by the master daemon model and the read-write RAPI will require changes to the cluster security; we move away from Twisted and use HTTP(S) for intra- and extra-cluster communications. For more details, see the security document in the doc/ directory.

Master daemon

In Ganeti 2.0, we will have the following entities:

  • the master daemon (on the master node)
  • the node daemon (on all nodes)
  • the command line tools (on the master node)
  • the RAPI daemon (on the master node)

The master-daemon related interaction paths are:

  • (CLI tools/RAPI daemon) and the master daemon, via the so-called LUXI API
  • the master daemon and the node daemons, via the node RPC

There are also some additional interaction paths for exceptional cases:

  • CLI tools might access the nodes via SSH (for gnt-cluster copyfile and gnt-cluster command)
  • master failover is a special case in which a non-master node will SSH and do node-RPC calls to the current master

The protocol between the master daemon and the node daemons will be changed from (Ganeti 1.2) Twisted PB (perspective broker) to HTTP(S), using a simple PUT/GET of JSON-encoded messages. This is done due to difficulties in working with the Twisted framework and its protocols in a multithreaded environment, which we can overcome by using a simpler stack (see the caveats section).

The protocol between the CLI/RAPI and the master daemon will be a custom one (called LUXI): on a UNIX socket on the master node, with rights restricted by filesystem permissions, the CLI/RAPI will talk to the master daemon using JSON-encoded messages.

The operations supported over this internal protocol will be encoded via a python library that will expose a simple API for its users. Internally, the protocol will simply encode all objects in JSON format and decode them on the receiver side.

For more details about the RAPI daemon see Remote API changes, and for the node daemon see Node daemon changes.

The LUXI protocol

As described above, the protocol for making requests or queries to the master daemon will be a UNIX-socket based simple RPC of JSON-encoded messages.

The choice of a UNIX socket was made in order to get rid of the need for authentication and authorisation inside Ganeti; for 2.0, the permissions on the UNIX socket itself will determine the access rights.

We will have two main classes of operations over this API:

  • cluster query functions
  • job related functions

The cluster query functions are usually short-duration, and are the equivalent of the OP_QUERY_* opcodes in Ganeti 1.2 (and they are still internally implemented with these opcodes). The clients are guaranteed to receive the response within a reasonable time via a timeout.

The job-related functions will be:

  • submit job
  • query job (which could also be categorized in the query-functions)
  • archive job (see the job queue design doc)
  • wait for job change, which allows a client to wait without polling

For more details of the actual operation list, see the Job Queue.

Both requests and responses will consist of a JSON-encoded message followed by the ETX character (ASCII decimal 3), which is not a valid character in JSON messages and thus can serve as a message delimiter. The contents of the messages will be a dictionary with two fields:

method: the name of the method called
args: the arguments to the method, as a list (no keyword arguments allowed)

Responses will follow the same format, with the two fields being:

success: a boolean denoting the success of the operation
result: the actual result, or error message in case of failure
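
The framing could be implemented along these lines (an illustrative sketch, not the actual client library; the method name in the comment is just an example):

    import json

    ETX = "\x03"   # ASCII decimal 3, the message delimiter

    def encode_request(method, args):
        """Serialise a request: a JSON dict followed by the ETX delimiter."""
        return json.dumps({"method": method, "args": args}) + ETX

    def decode_response(data):
        """Parse a response message (everything up to the first ETX)."""
        message, _, _ = data.partition(ETX)
        reply = json.loads(message)
        return reply["success"], reply["result"]

    # A request as it would appear on the socket (hypothetical method name):
    #   {"method": "SubmitJob", "args": [[...]]}\x03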

There are two special values for the result field:

  • in the case that the operation failed and this field is a list of length two, the client library will try to interpret it as an exception, the first element being the exception type and the second one the actual exception arguments; this allows a simple method of passing Ganeti-related exceptions across the interface
  • for the WaitForChange call (that waits on the server for a job to change status), if the result is equal to nochange instead of the usual result for this call (a list of changes), the library will internally retry the call; this is done in order to differentiate internally between a hung master daemon and a job that simply has not changed

Users of the API that don't use the provided python library should take care of the above two cases.
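
A client not using the provided library would need logic along these lines (a sketch only, not the actual library code):

    def interpret_result(success, result, wait_for_change=False):
        """Apply the two special cases of the result field described above."""
        if not success:
            # A two-element list is taken as (exception type, exception args).
            if isinstance(result, list) and len(result) == 2:
                exc_type, exc_args = result
                raise RuntimeError("%s: %s" % (exc_type, exc_args))
            raise RuntimeError(result)
        if wait_for_change and result == "nochange":
            # Not a real answer: the caller should retry the WaitForChange call.
            return None
        return result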

Master daemon implementation

The daemon will be based around a main I/O thread that will wait for new requests from the clients, and that does the setup/shutdown of the other thread pools.

There will be two other classes of threads in the daemon:

  • job processing threads, part of a thread pool, and which are long-lived, started at daemon startup and terminated only at shutdown time
  • client I/O threads, which are the ones that talk the local protocol (LUXI) to the clients, and are short-lived
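
A rough sketch of this thread layout (illustrative only, not the actual daemon code):

    import queue
    import threading

    job_queue = queue.Queue()

    def job_worker():
        """Long-lived job processing thread: started at daemon startup and
        terminated only at shutdown (when it receives the None sentinel)."""
        while True:
            job = job_queue.get()
            if job is None:
                break
            job()                          # a queued job, modelled as a callable

    def client_io(conn):
        """Short-lived thread talking LUXI to a single client connection."""
        # ... read a request from conn, queue a job or answer a query,
        # write the response back, then exit ...
        conn.close()

    def main_io_thread(listener, pool_size=4):
        """Main I/O thread: sets up the worker pool, then accepts clients."""
        for _ in range(pool_size):
            threading.Thread(target=job_worker).start()
        while True:
            conn, _ = listener.accept()
            threading.Thread(target=client_io, args=(conn,)).start()
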
Master startup/failover

In Ganeti 1.x there is no protection against failing over the master to a node with a stale configuration. In effect, the responsibility for correct failovers falls on the admin. This is true both for the new master and for an old, offline master starting up again.

Since in 2.x we are extending the cluster state to cover the job queue and have a daemon that will execute the job queue by itself, we want more resilience for the master role.

The following algorithm will happen whenever a node is ready to transition to the master role, either at startup time or at node failover:

  1. read the configuration file and parse the node list contained within

  2. query all the nodes and make sure we obtain an agreement via a quorum of at least half plus one nodes for the following:

    • we have the latest configuration and job list (as determined by the serial number on the configuration and highest job ID on the job queue)
    • if we are not failing over (but just starting), the quorum agrees that we are the designated master
    • if any of the above is false, we prevent the current operation (i.e. we don't become the master)
  3. at this point, the node transitions to the master role

  4. for all the in-progress jobs, mark them as failed, with reason unknown or something similar (master failed, etc.)
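
A sketch of the quorum check in step 2 (the query_fn callable and the field names are hypothetical, not the actual master daemon code):

    def confirm_master_role(node_list, my_name, my_config_serial, my_max_job_id,
                            failover, query_fn):
        """Return True if this node may take over the master role.

        query_fn is a (hypothetical) per-node RPC helper; it returns None for
        unreachable nodes, or a dict with the remote node's view of the
        configuration serial number, highest job ID and designated master.
        """
        needed = len(node_list) // 2 + 1          # at least half plus one
        votes = 0
        for node in node_list:
            answer = query_fn(node)
            if answer is None:                    # node down or unreachable
                continue
            up_to_date = (answer["config_serial"] <= my_config_serial and
                          answer["max_job_id"] <= my_max_job_id)
            agrees_master = failover or answer["master"] == my_name
            if up_to_date and agrees_master:
                votes += 1
        return votes >= needed
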

Since exceptional conditions could leave us in a situation in which no node can become the master due to inconsistent data, we will have an override switch for the master daemon startup that will assume the current node has the right data and will replicate all the configuration files to the other nodes.

Note: the above algorithm is by no means an election algorithm; it is a confirmation of the master role currently held by a node.

Logging

The logging system will be switched completely to the standard python logging module; currently it's logging-based, but exposes a different API, which is just overhead. As such, the code will be switched over to standard logging calls, and only the setup will be custom.

With this change, we will remove the separate debug/info/error logs and instead always have one logfile per daemon model:

  • master-daemon.log for the master daemon
  • node-daemon.log for the node daemon (this is the same as in 1.2)
  • rapi-daemon.log for the RAPI daemon logs
  • rapi-access.log, an additional log file for the RAPI that will be in the standard HTTP log format for possible parsing by other tools

Since the watcher will only submit jobs to the master for startup of the instances, its log file will contain less information than before, mainly that it will start the instance, but not the results.
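
A minimal sketch of such a per-daemon setup using the standard logging module (the function itself is illustrative, not the actual Ganeti code):

    import logging

    def setup_daemon_logging(logfile, debug=False):
        """Configure the root logger to write to a single per-daemon logfile."""
        handler = logging.FileHandler(logfile)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(levelname)s %(message)s"))
        root = logging.getLogger("")
        root.addHandler(handler)
        root.setLevel(logging.DEBUG if debug else logging.INFO)

    # e.g. at master daemon startup:
    # setup_daemon_logging("/var/log/ganeti/master-daemon.log")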

Node daemon changes

The only change to the node daemon is that, since we need better concurrency, we don't process the inter-node RPC calls in the node daemon itself, but we fork and process each request in a separate child.

Since we don't have many calls, and we only fork (not exec), the overhead should be minimal.
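
A simplified sketch of the fork-per-request model (the request handler below is a placeholder; the real node daemon serves its RPC calls over HTTP(S)):

    import os

    def process_rpc_request(conn):
        """Placeholder for running the actual RPC call (hypothetical)."""
        conn.sendall(b"{}")   # a real handler would compute and send the result

    def handle_connection(conn):
        """Serve one inter-node RPC request in a forked child process."""
        pid = os.fork()
        if pid == 0:
            # Child: inherits the socket, runs the potentially slow call,
            # then exits without returning to the daemon's main loop.
            try:
                process_rpc_request(conn)
            finally:
                os._exit(0)
        # Parent: close its copy of the socket and keep accepting requests;
        # children are reaped elsewhere (e.g. via a SIGCHLD handler).
        conn.close()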

Caveats

A discussed alternative is to keep the current model of individual processes touching the cluster configuration. The reasons we have not chosen this approach are:

  • the cost of reading and deserialising the cluster state today is not small enough that we can ignore it; the addition of the job queue will make the startup cost even higher. While this runtime cost is low, it can be on the order of a few seconds on bigger clusters, which for very quick commands is comparable to the actual duration of the computation itself
  • individual commands would make it harder to implement a fire-and-forget job request, along the lines of "start this instance but do not wait for it to finish"; it would require a model of backgrounding the operation and other things that are much better served by a daemon-based model

Another area of discussion is moving away from Twisted in this new implementation. While Twisted has its advantages, there are also many disadvantages to using it:

  • first and foremost, it's not a library but a framework; thus, if you use Twisted, all the code needs to be 'twisted-ised' and written in an asynchronous manner, using deferreds; while this method works, it's not a common way to code and it requires that the entire process workflow is based around a single reactor (Twisted's name for a main loop)
  • the more advanced granular locking that we want to implement would require, if written in the async manner, deep integration with the Twisted stack, to such an extent that the business logic is inseparable from the protocol coding; we felt that this is an unreasonable requirement, and that a good protocol library should allow complete separation of low-level protocol calls and business logic; by comparison, the threaded approach combined with the HTTP(S) protocol required (for the first iteration) absolutely no changes from the 1.2 code, and later changes for optimizing the inter-node RPC calls required just syntactic changes (e.g. rpc.call_... to self.rpc.call_...)

Another issue is Twisted API stability: during the Ganeti 1.x lifetime, we had to implement workarounds many times for changes between Twisted versions, so that, for example, 1.2 is able to use both Twisted 2.x and 8.x.

In the end, since we already had an HTTP server library for the RAPI, we just reused that for inter-node communication.

Granular locking

We want to make sure that multiple operations can run in parallel on a Ganeti cluster. In order for this to happen we need to make sure that concurrently run operations don't step on each other's toes and break the cluster.

This design addresses how we are going to deal with locking so that:

  • we preserve data coherency
  • we prevent deadlocks
  • we prevent job starvation

Reaching the maximum possible parallelism is a Non-Goal. We have identified a set of operations that are currently bottlenecks and need to be parallelised and have worked on those. In the future it will be possible to address other needs, thus making the cluster more and more parallel one step at a time.

This section only talks about parallelising Ganeti level operations, aka Logical Units, and the locking needed for that. Any other synchronization lock needed internally by the code is outside its scope.

Library details

The proposed library has these features:

  • internally managing all the locks, keeping the implementation transparent to their users
  • automatically grabbing multiple locks in the right order (avoid deadlock)
  • ability to transparently handle conversion to more granularity
  • support asynchronous operation (future goal)

Locking will be valid only on the master node and will not be a distributed operation. Therefore, in case of master failure, the operations currently running will be aborted and the locks will be lost; it remains for the administrator to clean up (if needed) the operation result (e.g. make sure an instance is either installed correctly or removed).

A corollary of this is that a master-failover operation with both masters alive needs to happen while no operations are running, and therefore no locks are held.

All the locks will be represented by objects (like locking.SharedLock), and the individual locks for each object will be created at initialisation time, from the config file.

The API will have a way to grab one or more locks at the same time. Any attempt to grab a lock out of order while already holding one will be detected and will fail.

The Locks

At the first stage we have decided to provide the following locks:

  • One "config file" lock
  • One lock per node in the cluster
  • One lock per instance in the cluster

All the instance locks will need to be taken before the node locks, and the node locks before the config lock. Locks will need to be acquired at the same time for multiple instances and nodes, and the internal ordering will be dealt with within the locking library, which, for simplicity, will just use alphabetical order.

Each lock has the following three possible statuses:

  • unlocked (anyone can grab the lock)
  • shared (anyone can grab/have the lock but only in shared mode)
  • exclusive (no one else can grab/have the lock)
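
An illustrative sketch of that acquisition order (the function and the lock objects' API are hypothetical, not the final locking library):

    def acquire_in_order(instance_locks, node_locks, config_lock, shared=False):
        """Acquire instance locks, then node locks, then the config lock.

        instance_locks and node_locks are mappings of name -> lock object;
        within each level the names are sorted alphabetically, so two jobs
        grabbing overlapping sets always take them in the same order and
        cannot deadlock each other.
        """
        ordered = ([instance_locks[n] for n in sorted(instance_locks)] +
                   [node_locks[n] for n in sorted(node_locks)] +
                   [config_lock])
        acquired = []
        try:
            for lock in ordered:
                lock.acquire(shared=shared)   # shared or exclusive mode
                acquired.append(lock)
        except Exception:
            for lock in reversed(acquired):   # back out on failure
                lock.release()
            raise
        return acquired
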
Handling conversion to more granularity

In order to convert to a more granular approach transparently, each time we split a lock into more granular ones we'll create a "metalock", which will depend on those sub-locks and live for the time necessary for all the code to convert (or forever, in some conditions). When a metalock exists, all converted code must acquire it in shared mode, so it can run concurrently, but still be exclusive with old code, which acquires it exclusively.

In the beginning the only such lock will be what replaces the current "command" lock, and it will acquire all the locks in the system before proceeding. This lock will be called the "Big Ganeti Lock" because holding it prevents any other concurrent Ganeti operations.

We might also want to devise more metalocks (e.g. all nodes, all nodes+config) in order to make it easier for some parts of the code to acquire what they need without specifying it explicitly.

In the future things like the node locks could become metalocks, should we decide to split them into an even more fine grained approach, but this will probably be only after the first 2.0 version has been released.

Adding/Removing locks

When a new instance or a new node is created an associated lock must be added to the list. The relevant code will need to inform the locking library of such a change.

This needs to be compatible with every other lock in the system, especially metalocks that guarantee to grab sets of resources without specifying them explicitly. The implementation of this will be handled in the locking library itself.

When instances or nodes disappear from the cluster the relevant locks must be removed. This is easier than adding new elements, as the code which removes them must own them exclusively already, and thus deals with metalocks exactly as normal code acquiring those locks. Any operation queuing on a removed lock will fail after its removal.

Asynchronous operations