Ganeti 2.0 design
This document describes the major changes in Ganeti 2.0 compared to the 1.2 version.
The 2.0 version will constitute a rewrite of the 'core' architecture, paving the way for additional features in future 2.x versions.
Objective
Ganeti 1.2 has many scalability issues and restrictions due to its roots as software for managing small and 'static' clusters.
Version 2.0 will attempt to remedy first the scalability issues and then the restrictions.
Background
While Ganeti 1.2 is usable, it severely limits the flexibility of the cluster administration and imposes a very rigid model. It has the following main scalability issues:
- only one operation at a time on the cluster [1]
- poor handling of node failures in the cluster
- mixing hypervisors in a cluster not allowed
It also has a number of artificial restrictions, due to historical design:
- fixed number of disks (two) per instance
- fixed number of NICs
[1] Replace disks will release the lock, but this is an exception and not a recommended way to operate.
The 2.0 version is intended to address some of these problems, and create a more flexible code base for future developments.
Among these problems, the single-operation-at-a-time restriction is the biggest issue with the current version of Ganeti. It is such a big impediment in operating bigger clusters that one is often tempted to remove the lock just to do a simple operation like starting an instance while an OS installation is running.
Scalability problems
Ganeti 1.2 has a single global lock, which is used for all cluster operations. This has been painful at various times, for example:
- It is impossible for two people to efficiently interact with a cluster (for example for debugging) at the same time.
- When batch jobs are running it's impossible to do other work (for example failovers/fixes) on a cluster.
This poses scalability problems: as clusters grow in node and instance count, it becomes much more likely that operations which should be able to run in parallel (for example because they happen on different nodes) end up stalling each other while waiting for the global lock, for no real reason.
One of the main causes of this global lock (beside the higher difficulty of ensuring data consistency in a more granular lock model) is the fact that currently there is no long-lived process in Ganeti that can coordinate multiple operations. Each command tries to acquire the so-called cmd lock and, when it succeeds, takes complete ownership of the cluster configuration and state.
Other scalability problems are due to the design of the DRBD device model, which assumed at its creation a low number (one to four) of instances per node, which is no longer true with today's hardware.
Artificial restrictions
Ganeti 1.2 (and previous versions) have a fixed model of two disks and one NIC per instance. This is a purely artificial restriction, but it touches so many areas (configuration, import/export, command line) that removing it is better suited to a major release than a minor one.
Architecture issues
The fact that each command is a separate process that reads the cluster state, executes the command, and saves the new state is also an issue on big clusters where the configuration data for the cluster begins to be non-trivial in size.
Overview
In order to solve the scalability problems, a rewrite of the core design of Ganeti is required. While the cluster operations themselves won't change (e.g. start instance will do the same things), the way these operations are scheduled internally will change radically.
The new design will change the cluster architecture to:
[architecture diagram: the CLI tools and the RAPI daemon talk to the master daemon, which in turn talks to the node daemons]
This differs from the 1.2 architecture by the addition of the master daemon, which will be the only entity to talk to the node daemons.
Detailed design
The changes for 2.0 can be split into roughly three areas:
- core changes that affect the design of the software
- features (or restriction removals) which do not have a wide impact on the design
- user-level and API-level changes which translate into differences for the operation of the cluster
Core changes
The main change will be switching from a per-process model to a daemon-based model, where the individual gnt-* commands will be clients that talk to this daemon (see Master daemon). This will allow us to get rid of the global cluster lock for most operations, having instead a per-object lock (see Granular locking). Also, the daemon will be able to queue jobs, which will allow the individual clients to submit jobs without waiting for them to finish, and also to see the results of old requests (see Job Queue).
Besides these major changes, another 'core' change, though less visible to the users, will be the model of object attribute storage: attributes will be separated into namespaces (such that a Xen PVM instance will not have the Xen HVM parameters). This will allow future flexibility in defining additional parameters. For more details see Object parameters.
The various changes brought in by the master daemon model and the read-write RAPI will require changes to the cluster security; we move away from Twisted and use HTTP(S) for intra- and extra-cluster communications. For more details, see the security document in the doc/ directory.
Master daemon
In Ganeti 2.0, we will have the following entities:
- the master daemon (on the master node)
- the node daemon (on all nodes)
- the command line tools (on the master node)
- the RAPI daemon (on the master node)
The master-daemon related interaction paths are:
- (CLI tools/RAPI daemon) and the master daemon, via the so-called LUXI API
- the master daemon and the node daemons, via the node RPC
There are also some additional interaction paths for exceptional cases:
- CLI tools might access the nodes via SSH (for gnt-cluster copyfile and gnt-cluster command)
- master failover is a special case in which a non-master node will SSH and do node-RPC calls to the current master
The protocol between the master daemon and the node daemons will be changed from (Ganeti 1.2) Twisted PB (perspective broker) to HTTP(S), using a simple PUT/GET of JSON-encoded messages. This is done due to difficulties in working with the Twisted framework and its protocols in a multithreaded environment, which we can overcome by using a simpler stack (see the caveats section).
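As an illustration of this model (the URL layout, port number and headers below are assumptions for the sketch, not the final protocol), a node RPC call could be as simple as a PUT of a JSON body followed by decoding the JSON reply::

  import http.client
  import json

  def node_rpc_call(node, method, args, port=1811):
      # PUT a JSON-encoded argument list to the node daemon and decode
      # the JSON-encoded reply; one URL per RPC method is assumed here.
      conn = http.client.HTTPSConnection(node, port)
      conn.request("PUT", "/%s" % method, json.dumps(args),
                   {"Content-Type": "application/json"})
      response = conn.getresponse()
      payload = response.read()
      conn.close()
      return json.loads(payload)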
The protocol between the CLI/RAPI and the master daemon will be a custom one (called LUXI): on a UNIX socket on the master node, with rights restricted by filesystem permissions, the CLI/RAPI will talk to the master daemon using JSON-encoded messages.
The operations supported over this internal protocol will be encoded via a python library that will expose a simple API for its users. Internally, the protocol will simply encode all objects in JSON format and decode them on the receiver side.
For more details about the RAPI daemon see Remote API changes, and for the node daemon see Node daemon changes.
The LUXI protocol
As described above, the protocol for making requests or queries to the master daemon will be a UNIX-socket based simple RPC of JSON-encoded messages.
The choice of a UNIX socket was made in order to remove the need for authentication and authorisation inside Ganeti; for 2.0, the permissions on the UNIX socket itself will determine the access rights.
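A minimal sketch of this access model (the socket path and mode are illustrative assumptions) is simply a UNIX socket whose filesystem permissions are tightened before accepting clients::

  import os
  import socket

  SOCKET_PATH = "/var/run/ganeti/master.sock"  # illustrative path

  server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  server.bind(SOCKET_PATH)
  os.chmod(SOCKET_PATH, 0o600)  # only the owner (root) may connect
  server.listen(5)

Real code would restrict the umask before binding, so that the socket is never exposed with looser permissions between bind() and chmod().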
We will have two main classes of operations over this API:
- cluster query functions
- job related functions
The cluster query functions are usually short-duration, and are the equivalent of the OP_QUERY_* opcodes in Ganeti 1.2 (and they are still internally implemented with these opcodes). The clients are guaranteed to receive the response in a reasonable time via a timeout.
The job-related functions will be:
- submit job
- query job (which could also be categorized in the query-functions)
- archive job (see the job queue design doc)
- wait for job change, which allows a client to wait without polling
For more details of the actual operation list, see the Job Queue.
Both requests and responses will consist of a JSON-encoded message followed by the ETX character (ASCII decimal 3), which is not a valid character in JSON messages and thus can serve as a message delimiter. The contents of the messages will be a dictionary with two fields:
- method: the name of the method called
- args: the arguments to the method, as a list (no keyword arguments allowed)
Responses will follow the same format, with the two fields being:
- success: a boolean denoting the success of the operation
- result: the actual result, or error message in case of failure
There are two special values for the result field:
- in case the operation failed and this field is a list of length two, the client library will try to interpret it as an exception, the first element being the exception type and the second one the actual exception arguments; this allows a simple method of passing Ganeti-related exceptions across the interface
- for the WaitForChange call (which waits on the server for a job to change status), if the result is equal to nochange instead of the usual result for this call (a list of changes), the library will internally retry the call; this is done in order to differentiate between a hung master daemon and a job that simply has not changed
Users of the API that don't use the provided python library should take care of the above two cases.
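To make the wire format concrete, here is a sketch (not the actual client library; the transport object and the WaitForJobChange method name are assumptions) of encoding requests, decoding responses, and handling the two special result values described above::

  import json

  ETX = chr(3)  # ASCII decimal 3, the message delimiter

  def encode_request(method, args):
      return json.dumps({"method": method, "args": args}) + ETX

  def decode_response(data):
      message = json.loads(data.rstrip(ETX))
      if message["success"]:
          return message["result"]
      result = message["result"]
      if isinstance(result, list) and len(result) == 2:
          # A two-element list is interpreted as (exception type,
          # exception arguments), as described above.
          raise Exception("%s: %s" % (result[0], result[1]))
      raise Exception(result)

  def wait_for_job_change(transport, job_id):
      # Retry internally while the server answers "nochange", so that a
      # hung master daemon (no answer at all) can be distinguished from
      # a job that simply has not changed (a "nochange" answer).
      while True:
          reply = transport.call(encode_request("WaitForJobChange", [job_id]))
          result = decode_response(reply)
          if result != "nochange":
              return result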
Master daemon implementation
The daemon will be based around a main I/O thread that will wait for new requests from the clients, and that handles the setup/shutdown of the other threads (pools).
There will be two other classes of threads in the daemon:
- job processing threads, part of a thread pool, and which are long-lived, started at daemon startup and terminated only at shutdown time
- client I/O threads, which are the ones that talk the local protocol (LUXI) to the clients, and are short-lived
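Schematically (names and pool size are illustrative assumptions, not the actual daemon code), the thread layout looks like this::

  import queue
  import threading

  job_queue = queue.Queue()

  def job_worker():
      # Long-lived: started at daemon startup, runs until shutdown.
      while True:
          job = job_queue.get()
          if job is None:       # shutdown sentinel
              break
          job()                 # a job is modelled here as a callable

  def serve_client(connection):
      # Short-lived: one thread per client, speaking LUXI, then exiting.
      pass

  # The main I/O thread sets up the pool, then accepts connections and
  # hands each one to a short-lived client thread.
  workers = [threading.Thread(target=job_worker) for _ in range(4)]
  for worker in workers:
      worker.start()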
Master startup/failover
In Ganeti 1.x there is no protection against failing over the master to a node with a stale configuration. In effect, the responsibility for correct failovers falls on the admin. This is true both for the new master and for when an old, offline master starts up.
Since in 2.x we are extending the cluster state to cover the job queue, and have a daemon that will execute the job queue by itself, we want more resilience for the master role.
The following algorithm will run whenever a node is ready to transition to the master role, either at startup time or at node failover:
- read the configuration file and parse the node list contained within
- query all the nodes and make sure we obtain an agreement via a quorum of at least half plus one nodes for the following:
  - we have the latest configuration and job list (as determined by the serial number on the configuration and highest job ID on the job queue)
  - there is not even a single node having a newer configuration file
  - if we are not failing over (but just starting), the quorum agrees that we are the designated master
  - if any of the above is false, we prevent the current operation (i.e. we don't become the master)
- at this point, the node transitions to the master role
- for all the in-progress jobs, mark them as failed, with reason unknown or something similar (master failed, etc.)
Since exceptional conditions could lead to a situation in which no node can become the master because of inconsistent data, we will have an override switch for the master daemon startup that will assume the current node has the right data and will replicate all the configuration files to the other nodes.
Note: the above algorithm is by no means an election algorithm; it is a confirmation of the master role currently held by a node.
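To make the confirmation step concrete, here is a minimal sketch of the quorum check (query_node and the returned field names are assumed helpers, not actual Ganeti code)::

  def confirm_master_role(my_name, my_serial, my_job_id, nodes, query_node,
                          failover=False):
      # Return True iff a quorum confirms this node may assume the master
      # role. query_node(node) is assumed to return the remote node's view
      # (config serial, highest job ID, designated master), or None if the
      # node is unreachable.
      votes = 0
      for node in nodes:
          info = query_node(node)
          if info is None:
              continue  # unreachable nodes cannot vote
          # Not even a single node may have newer data than we do.
          if info["serial"] > my_serial or info["last_job_id"] > my_job_id:
              return False
          # At startup (not failover), the node must also agree that we
          # are the designated master.
          if failover or info["master"] == my_name:
              votes += 1
      # Quorum: at least half plus one of all nodes must agree.
      return votes >= len(nodes) // 2 + 1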
Logging
The logging system will be switched completely to the standard python logging module; currently it's logging-based, but exposes a different API, which is just overhead. As such, the code will be switched over to standard logging calls, and only the setup will be custom.
With this change, we will remove the separate debug/info/error logs and instead always have a single logfile per daemon:
- master-daemon.log for the master daemon
- node-daemon.log for the node daemon (this is the same as in 1.2)
- rapi-daemon.log for the RAPI daemon logs
- rapi-access.log, an additional log file for the RAPI that will be in the standard HTTP log format for possible parsing by other tools
Since the watcher will only submit jobs to the master for the startup of instances, its log file will contain less information than before: mainly that it started an instance, but not the results.
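Setting this up requires nothing beyond the standard module; a sketch (the format string and path are illustrative, not the actual setup code)::

  import logging

  def setup_logging(logfile):
      handler = logging.FileHandler(logfile)
      handler.setFormatter(logging.Formatter(
          "%(asctime)s %(levelname)s %(message)s"))
      root = logging.getLogger("")
      root.addHandler(handler)
      root.setLevel(logging.INFO)

  # e.g. for the master daemon:
  setup_logging("/var/log/ganeti/master-daemon.log")
  logging.info("master daemon starting")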
Node daemon changes
The only change to the node daemon is that, since we need better concurrency, we don't process the inter-node RPC calls in the node daemon itself, but we fork and process each request in a separate child.
Since we don't have many calls, and we only fork (not exec), the overhead should be minimal.
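A schematic sketch of this fork-per-request model (handle_request is an assumed handler, not actual node daemon code)::

  import os
  import signal

  def _reap_children(signum, frame):
      # Collect exited children so they do not accumulate as zombies.
      try:
          while os.waitpid(-1, os.WNOHANG)[0] > 0:
              pass
      except OSError:
          pass

  signal.signal(signal.SIGCHLD, _reap_children)

  def process_request(connection, handle_request):
      pid = os.fork()
      if pid == 0:
          # Child: handle this single RPC call, then exit; since there is
          # no exec(), the fork overhead stays minimal.
          try:
              handle_request(connection)
          finally:
              os._exit(0)
      # Parent: close its copy and go back to accepting requests.
      connection.close()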
Caveats
A discussed alternative is to keep the current model of individual processes touching the cluster configuration. The reasons we have not chosen this approach are: