Commit 6c2d0b44 authored by Iustin Pop

More updates to the documentation

This patch removes the client-api.txt document (since it's obsoleted by
the documentation inside the design-2.0.rst document) and adds many
updates to the latter.

Reviewed-by: imsnah
parent b8195800

Notes about the client API
~~~~~~~~~~~~~~~~~~~~~~~~~~

Starting with Ganeti 1.3, the individual commands (gnt-...) do not
execute code directly, but instead via a master daemon. The
communication between these commands and the daemon will be
language-agnostic, and we will be providing a Python library
implementing all operations.

TODO: add tags support, add gnt-instance info implementation, document
all results from query and all opcode fields.

Protocol
========

The protocol for communication will consist of passing JSON-encoded
messages over a UNIX socket. The exchange is strictly alternating:
send message, receive message, send message, and so on. Each message
(either request or response) will end (after the JSON message) with an
``ETX`` character (ASCII decimal 3), which is not a valid character in
JSON messages and thus can serve as a message delimiter. Quoting from
the http://www.json.org grammar::

  char: any unicode character except " or \ or control character
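
For illustration, a minimal sketch of this framing in Python, assuming
a connected UNIX socket (the standard ``json`` module is used here for
brevity; code of this era would have used ``simplejson``)::

  import json

  ETX = "\x03"  # message delimiter, ASCII decimal 3

  def send_message(sock, obj):
    """Encode obj as JSON and terminate the message with ETX."""
    sock.sendall(json.dumps(obj) + ETX)

  def recv_message(sock):
    """Read until ETX, then decode the JSON payload."""
    buf = ""
    while ETX not in buf:
      chunk = sock.recv(4096)
      if not chunk:
        raise EOFError("connection closed before message delimiter")
      buf += chunk
    return json.loads(buf.split(ETX, 1)[0])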

There are three request types that can be made:

- submit job; a job is a sequence of opcodes that modify the cluster
- abort a job; in some circumstances, a job can be aborted; the exact
  conditions depend on the master daemon implementation and clients
  should not rely on being able to abort jobs
- query objects; this is a generic form of query that works for all
  object types

All requests will be encoded as a JSON object, having three fields:

- ``request``: string, one of ``submit``, ``abort``, ``query``
- ``data``: the payload of the request, variable type based on request
- ``version``: the protocol version spoken by the client; we are
  describing here version 0

The response to any request will be a JSON object, with two fields:

- ``success``: either ``true`` or ``false``, denoting whether the
  request was successful or not
- ``result``: the result of the request (depends on request type) if
  successful, otherwise the error message (describing the failure)
The server has no defined upper-limit on the time it will take to
respond to a request, so the clients should implement their own timeout
handling. Note though that most requests should be answered quickly, if
the cluster is in a normal state.
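
As a hedged sketch, a client-side helper with its own timeout could
look like this (the socket path and timeout value are illustrative
assumptions; ``send_message``/``recv_message`` are the framing helpers
sketched above)::

  import socket

  def do_request(sock_path, request, data, timeout=60.0):
    """Send one version-0 request and return its result."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.settimeout(timeout)  # the server promises no upper bound
    try:
      sock.connect(sock_path)
      send_message(sock, {"request": request, "data": data, "version": 0})
      response = recv_message(sock)
    finally:
      sock.close()
    if not response["success"]:
      raise RuntimeError("request failed: %s" % response["result"])
    return response["result"]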

Submit
------

The submit ``data`` field will be a JSON object - a (partial) job
description. It will have the following fields:

- ``opcode_list``: a list of opcode objects, see below

The opcode objects supported will mostly be the ones supported by the
internal Ganeti implementation; currently there is no well-defined
specification for them (work in progress).

Each opcode will be represented in the message list as an object with:

- ``OP_ID``: an opcode id; this will be equivalent to the ``OP_ID``
  class attribute on opcodes (in lib/opcodes.py)
- other fields: as needed by the opcode in question

Small example, request::

  {
    "opcode_list": [
      {
        "instance_name": "instance1.example.com",
        "OP_ID": "OP_INSTANCE_SHUTDOWN"
      }
    ]
  }

And response::

  {
    "result": "1104",
    "success": true
  }

The result of the submit request will be, if successful, a single JSON
value denoting the job ID. While the job ID might be (or look like) an
integer, clients should not depend on this and should treat the ID as
an opaque identifier.

Abort
-----

The abort request data will be a single job ID (as returned by submit
or query jobs). The result will hold no data (i.e. it will be a JSON
``null`` value) if successful, and will be the error message if it
fails.

Query
-----

The ``data`` argument to the query request is a JSON object containing:

- ``object``: the object type to be queried
- ``names``: if querying a list of objects, this can restrict the
  query to a subset of the entire list
- ``fields``: the list of fields we are interested in

The valid values for the ``object`` field are:

- ``cluster``
- ``node``
- ``instance``

For the ``cluster`` object, the ``names`` parameter is unused and must
be null.

The result value will be a list of lists: each row in the top-level
list will hold the results for a single object, and each row in the
per-object results will hold one result field, in the same order as
the ``fields`` value.

Small example, request::

  {
    "data": {
      "fields": [
        "name",
        "admin_memory"
      ],
      "object": "instance"
    },
    "request": "query"
  }

And response::

  {
    "result": [
      [
        "instance1.example.com",
        "128"
      ],
      [
        "instance2.example.com",
        "4096"
      ]
    ],
    "success": true
  }
@@ -22,7 +22,7 @@ then the restrictions.
Background
==========

While Ganeti 1.2 is usable, it severely limits the flexibility of the
cluster administration and imposes a very rigid model. It has the
following main scalability issues:
@@ -33,13 +33,19 @@ following main scalability issues:
It also has a number of artificial restrictions due to historical
design:

- fixed number of disks (two) per instance
- fixed number of NICs

.. [#] Replace disks will release the lock, but this is an exception
   and not a recommended way to operate

The 2.0 version is intended to address some of these problems, and
create a more flexible code base for future developments.

Among these problems, the single-operation-at-a-time restriction is
the biggest issue with the current version of Ganeti. It is such a big
impediment in operating bigger clusters that many times one is tempted
to remove the lock just to do a simple operation like starting an
instance while an OS installation is running.

Scalability problems
--------------------
@@ -60,10 +66,10 @@ lock, without a real reason for that to happen.
One of the main causes of this global lock (beside the higher
difficulty of ensuring data consistency in a more granular lock model)
is the fact that currently there is no long-lived process in Ganeti
that can coordinate multiple operations. Each command tries to acquire
the so-called *cmd* lock and when it succeeds, it takes complete
ownership of the cluster configuration and state.

Other scalability problems are due to the design of the DRBD device
model, which assumed at its creation a low (one to four) number of
@@ -77,6 +83,14 @@ instance model. This is a purely artificial restrictions, but it
touches multiple areas (configuration, import/export, command line),
so it is better suited to a major release than a minor one.

Architecture issues
-------------------

The fact that each command is a separate process that reads the
cluster state, executes the command, and saves the new state is also
an issue on big clusters where the configuration data for the cluster
begins to be non-trivial in size.

Overview
========
@@ -109,24 +123,23 @@ Core changes

The main changes will be switching from a per-process model to a
daemon-based model, where the individual gnt-* commands will be
clients that talk to this daemon (see `Master daemon`_). This will
allow us to get rid of the global cluster lock for most operations,
having instead a per-object lock (see `Granular locking`_). Also, the
daemon will be able to queue jobs, and this will allow the individual
clients to submit jobs without waiting for them to finish, and also
see the result of old requests (see `Job Queue`_).

Beside these major changes, another 'core' change, though less visible
to the users, will be changing the model of object attribute storage,
and separating that into name spaces (such that a Xen PVM instance
will not have the Xen HVM parameters). This will allow future
flexibility in defining additional parameters. For more details see
`Object parameters`_.

The various changes brought in by the master daemon model and the
read-write RAPI will require changes to the cluster security; we move
away from Twisted and use HTTP(S) for intra- and extra-cluster
communications. For more details, see the security document in the
doc/ directory.
@@ -140,36 +153,57 @@ In Ganeti 2.0, we will have the following *entities*:
- the command line tools (on the master node)
- the RAPI daemon (on the master node)

The master-daemon related interaction paths are:

- (CLI tools/RAPI daemon) and the master daemon, via the so-called
  *LUXI* API
- the master daemon and the node daemons, via the node RPC

There are also some additional interaction paths for exceptional cases:

- CLI tools might access the nodes via SSH (for ``gnt-cluster copyfile``
  and ``gnt-cluster command``)
- master failover is a special case in which a non-master node will SSH
  and do node-RPC calls to the current master

The protocol between the master daemon and the node daemons will be
changed from (Ganeti 1.2) Twisted PB (perspective broker) to HTTP(S),
using a simple PUT/GET of JSON-encoded messages. This is done due to
difficulties in working with the Twisted framework and its protocols
in a multithreaded environment, which we can overcome by using a
simpler stack (see the caveats section).

The protocol between the CLI/RAPI and the master daemon will be a
custom one (called *LUXI*): on a UNIX socket on the master node, with
rights restricted by filesystem permissions, the CLI/RAPI will talk to
the master daemon using JSON-encoded messages.

The operations supported over this internal protocol will be encoded
via a Python library that will expose a simple API for its
users. Internally, the protocol will simply encode all objects in JSON
format and decode them on the receiver side.

For more details about the RAPI daemon see `Remote API changes`_, and
for the node daemon see `Node daemon changes`_.

The LUXI protocol
+++++++++++++++++

As described above, the protocol for making requests or queries to the
master daemon will be a UNIX-socket based simple RPC of JSON-encoded
messages.

The choice of UNIX sockets was in order to get rid of the need for
authentication and authorisation inside Ganeti; for 2.0, the
permissions on the UNIX socket itself will determine the access
rights.

We will have two main classes of operations over this API:

- cluster query functions
- job related functions

The cluster query functions are usually short-duration, and are the
equivalent of the ``OP_QUERY_*`` opcodes in Ganeti 1.2 (and they are
still implemented internally with these opcodes). The clients are
guaranteed to receive the response in a reasonable time via a timeout.
@@ -180,10 +214,45 @@ The job-related functions will be:
- archive job (see the job queue design doc)
- wait for job change, which allows a client to wait without polling

For more details of the actual operation list, see the `Job Queue`_.

Both requests and responses will consist of a JSON-encoded message
followed by the ``ETX`` character (ASCII decimal 3), which is not a
valid character in JSON messages and thus can serve as a message
delimiter. The contents of the messages will be a dictionary with two
fields:

:method:
  the name of the method called
:args:
  the arguments to the method, as a list (no keyword arguments allowed)

Responses will follow the same format, with the two fields being:

:success:
  a boolean denoting the success of the operation
:result:
  the actual result, or error message in case of failure

There are two special values for the result field:

- in the case that the operation failed, and this field is a list of
  length two, the client library will try to interpret it as an
  exception, the first element being the exception type and the second
  one the actual exception arguments; this will allow a simple method
  of passing Ganeti-related exceptions across the interface
- for the *WaitForChange* call (that waits on the server for a job to
  change status), if the result is equal to ``nochange`` instead of the
  usual result for this call (a list of changes), then the library will
  internally retry the call; this is done in order to differentiate
  internally between a hung master daemon and a job that simply has not
  changed

Users of the API that don't use the provided Python library should
take care of the above two cases.
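
As a sketch of what such a non-library user must therefore handle (the
wire name ``WaitForChange`` follows the text above; everything else is
an illustrative assumption)::

  import json
  import socket

  ETX = "\x03"

  def call_method(sock, method, args):
    """One LUXI round-trip: send {method, args}, get {success, result}."""
    sock.sendall(json.dumps({"method": method, "args": args}) + ETX)
    buf = ""
    while ETX not in buf:
      chunk = sock.recv(4096)
      if not chunk:
        raise EOFError("connection closed mid-response")
      buf += chunk
    return json.loads(buf.split(ETX, 1)[0])

  def wait_for_change(sock, job_id):
    while True:
      response = call_method(sock, "WaitForChange", [job_id])
      if not response["success"]:
        result = response["result"]
        # a two-element list encodes (exception type, exception args)
        if isinstance(result, list) and len(result) == 2:
          raise RuntimeError("%s: %s" % (result[0], result[1]))
        raise RuntimeError(result)
      if response["result"] == "nochange":
        continue  # the job has not changed yet; retry transparently
      return response["result"]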

Master daemon implementation
++++++++++++++++++++++++++++

The daemon will be based around a main I/O thread that will wait for
new requests from the clients, and that does the setup/shutdown of the
@@ -195,7 +264,7 @@ There will two other classes of threads in the daemon:
  long-lived, started at daemon startup and terminated only at shutdown
  time
- client I/O threads, which are the ones that talk the local protocol
  (LUXI) to the clients, and are short-lived
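
A rough sketch of this thread model (all names are assumptions for
illustration, not the actual implementation)::

  import threading
  import Queue  # the Python 2 name of the queue module

  class MasterDaemonSketch(object):
    """One main I/O thread (the caller), N long-lived workers, and a
    short-lived thread per connected client."""

    def __init__(self, num_workers=4):
      self.job_queue = Queue.Queue()
      # worker threads: long-lived, started at daemon startup
      for _ in range(num_workers):
        t = threading.Thread(target=self._worker)
        t.setDaemon(True)
        t.start()

    def _worker(self):
      while True:
        job = self.job_queue.get()  # blocks until a job is queued
        job.run()

    def handle_connection(self, conn):
      # short-lived client I/O thread speaking LUXI to this client
      threading.Thread(target=self._client_io, args=(conn,)).start()

    def _client_io(self, conn):
      pass  # read LUXI requests, queue jobs, send responses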

Master startup/failover
+++++++++++++++++++++++
@@ -229,19 +298,30 @@ failover:
   - if we are not failing over (but just starting), the
     quorum agrees that we are the designated master
   - if any of the above is false, we prevent the current operation
     (i.e. we don't become the master)

#. at this point, the node transitions to the master role
#. for all the in-progress jobs, mark them as failed, with
   reason unknown or something similar (master failed, etc.)

Since, due to exceptional conditions, we could have a situation in
which no node can become the master because of inconsistent data, we
will have an override switch for the master daemon startup that will
assume the current node has the right data and will replicate all the
configuration files to the other nodes.

**Note**: the above algorithm is by no means an election algorithm; it
is a *confirmation* of the master role currently held by a node.
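
A sketch of this confirmation check (the function and its inputs are
assumptions for illustration; the quorum size follows the usual
majority rule)::

  def confirm_master_role(my_name, my_serial, answers):
    """Confirmation (not election) of the master role.

    answers maps each reachable node to the master it believes in and
    the configuration serial number it holds, ourselves included.
    """
    quorum = len(answers) // 2 + 1
    agreeing = [node for node, (master, _) in answers.items()
                if master == my_name]
    newest = max(serial for _, serial in answers.values())
    if len(agreeing) < quorum:
      return False  # the quorum does not agree that we are the master
    if my_serial < newest:
      return False  # another node has newer configuration data
    return True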

Logging
+++++++

The logging system will be switched completely to the standard Python
logging module; currently it's logging-based, but exposes a different
API, which is just overhead. As such, the code will be switched over
to standard logging calls, and only the setup will be custom.
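
A minimal sketch of such a custom setup around standard calls (the log
file path and format string are assumptions)::

  import logging

  def setup_logging(program, debug=False):
    """One log file per daemon, stdlib logging only."""
    handler = logging.FileHandler("/var/log/ganeti/%s.log" % program)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(message)s"))
    root = logging.getLogger("")
    root.addHandler(handler)
    root.setLevel(debug and logging.DEBUG or logging.INFO)

After this, the rest of the code simply uses ``logging.info(...)`` and
friends, with no Ganeti-specific wrappers.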

With this change, we will remove the separate debug/info/error logs,
and instead always have one log file per daemon:
@@ -250,11 +330,22 @@ and instead have always one logfile per daemon model:
- node-daemon.log for the node daemon (this is the same as in 1.2)
- rapi-daemon.log for the RAPI daemon logs
- rapi-access.log, an additional log file for the RAPI that will be
  in the standard HTTP log format for possible parsing by other tools

Since the `watcher`_ will only submit jobs to the master for startup
of the instances, its log file will contain less information than
before, mainly that it will start the instance, but not the results.

Node daemon changes
+++++++++++++++++++

The only change to the node daemon is that, since we need better
concurrency, we don't process the inter-node RPC calls in the node
daemon itself, but we fork and process each request in a separate
child.

Since we don't have many calls, and we only fork (not exec), the
overhead should be minimal.
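
A hedged sketch of this fork-per-request model (``process_request`` is
a stand-in for the actual RPC handling; child reaping is elided)::

  import os

  def process_request(conn):
    pass  # stand-in: parse the HTTP(S) request and run the RPC call

  def handle_rpc_connection(conn):
    """Fork a child to process one inter-node RPC call."""
    pid = os.fork()
    if pid > 0:
      conn.close()  # parent: the child owns the request from here on
      return        # reap children elsewhere, e.g. on SIGCHLD
    try:
      process_request(conn)  # child: do the work...
    finally:
      os._exit(0)            # ...and exit without cleanup handlers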

Caveats
+++++++
@@ -277,20 +368,33 @@ chosen this approach is:
much better served by a daemon-based model

Another area of discussion is moving away from Twisted in this new
implementation. While Twisted has its advantages, there are also many
disadvantages to using it:

- first and foremost, it's not a library, but a framework; thus, if
  you use twisted, all the code needs to be 'twiste-ized' and written
  in an asynchronous manner, using deferreds; while this method works,
  it's not a common way to code and it requires that the entire process
  workflow is based around a single *reactor* (Twisted name for a main
  loop)
- the more advanced granular locking that we want to implement would
  require, if written in the async-manner, deep integration with the
  Twisted stack, to such an extent that business logic is inseparable
  from the protocol coding; we felt that this is an unreasonable
  request, and that a good protocol library should allow complete
  separation of low-level protocol calls and business logic; by
  comparison, the threaded approach combined with the HTTP(S) protocol
  required (for the first iteration) absolutely no changes from the 1.2
  code, and later changes for optimizing the inter-node RPC calls
  required just syntactic changes (e.g. ``rpc.call_...`` to
  ``self.rpc.call_...``)

Another issue is with the Twisted API stability - during the Ganeti
1.x lifetime, we had to implement many workarounds to changes in the
Twisted version, so that, for example, 1.2 is able to use both Twisted
2.x and 8.x.

In the end, since we already had an HTTP server library for the RAPI,
we just reused that for inter-node communication.

Granular locking
@@ -302,48 +406,49 @@ operations don't step on each other toes and break the cluster.

This design addresses how we are going to deal with locking so that:

- high urgency operations are not stopped by long length ones
- long length operations can run in parallel
- we preserve data coherency
- we prevent deadlocks
- we prevent job starvation

Reaching the maximum possible parallelism is a non-goal. We have
identified a set of operations that are currently bottlenecks and need
to be parallelised, and have worked on those. In the future it will be
possible to address other needs, thus making the cluster more and more
parallel one step at a time.

This section only talks about parallelising Ganeti-level operations,
aka Logical Units, and the locking needed for that. Any other
synchronization lock needed internally by the code is outside its
scope.

Ganeti 1.2
++++++++++

We intend to implement a Ganeti locking library, which can be used by
the various Ganeti code components in order to easily, efficiently and
correctly grab the locks they need to perform their function.

Library details
+++++++++++++++

The proposed library has these features:

- internally managing all the locks, making the implementation transparent
  from their usage
- automatically grabbing multiple locks in the right order (avoid deadlock)
- ability to transparently handle conversion to more granularity
- support asynchronous operation (future goal)

Locking will be valid only on the master node and will not be a
distributed operation. Therefore, in case of master failure, the
operations currently running will be aborted and the locks will be
lost; it remains to the administrator to clean up (if needed) the
operation result (e.g. make sure an instance is either installed
correctly or removed).

A corollary of this is that a master-failover operation with both
masters alive needs to happen while no operations are running, and
therefore no locks are held.

All the locks will be represented by objects (like
``lockings.SharedLock``), and the individual locks for each object
will be created at initialisation time, from the config file.

The API will have a way to grab one or more than one lock at the same
time. Any attempt to grab a lock while already holding one in the
wrong order will be checked for, and will fail.
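
A sketch of what ordered multi-lock acquisition could look like (the
class and its internals are assumptions, not the actual ``lockings``
API; alphabetical ordering follows the rule described below)::

  class LockSetSketch(object):
    """Acquire several named locks in a fixed (alphabetical) order,
    so that concurrent callers cannot deadlock on each other."""

    def __init__(self, locks):
      self.locks = locks  # dict: lock name -> lock object

    def acquire(self, names, shared=False):
      acquired = []
      try:
        for name in sorted(names):  # the deadlock-avoidance rule
          self.locks[name].acquire(shared=shared)
          acquired.append(name)
      except:
        for name in acquired:
          self.locks[name].release()
        raise
      return acquired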

The Locks
+++++++++
@@ -360,12 +465,18 @@ time for multiple instances and nodes, and internal ordering will be dealt
within the locking library, which, for simplicity, will just use alphabetical
order.

Each lock has the following three possible statuses:

- unlocked (anyone can grab the lock)
- shared (anyone can grab/have the lock but only in shared mode)
- exclusive (no one else can grab/have the lock)
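
These three statuses map naturally onto a shared/exclusive
(readers-writer) lock; a compact sketch of the semantics, not the real
``lockings.SharedLock`` implementation::

  import threading

  class SharedLockSketch(object):
    """Many shared holders, or a single exclusive one, never both."""

    def __init__(self):
      self._cond = threading.Condition()
      self._shared = 0         # current number of shared holders
      self._exclusive = False  # whether an exclusive holder exists

    def acquire(self, shared=False):
      self._cond.acquire()
      try:
        if shared:
          while self._exclusive:
            self._cond.wait()
          self._shared += 1
        else:
          while self._exclusive or self._shared:
            self._cond.wait()
          self._exclusive = True
      finally:
        self._cond.release()

    def release(self, shared=False):
      self._cond.acquire()
      try:
        if shared:
          self._shared -= 1
        else:
          self._exclusive = False
        self._cond.notifyAll()
      finally:
        self._cond.release()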

Handling conversion to more granularity
+++++++++++++++++++++++++++++++++++++++

In order to convert to a more granular approach transparently, each
time we split a lock into more we'll create a "metalock", which will
depend on those sub-locks and live for the time necessary for all the
code to convert (or forever, in some conditions). When a metalock
exists, all converted code must acquire it in shared mode, so it can
run concurrently, but still be exclusive with old code, which acquires
it exclusively.
@@ -373,7 +484,7 @@ with old code, which acquires it exclusively.
In the beginning the only such lock will be what replaces the current "command"
lock, and will acquire all the locks in the system, before proceeding. This
lock will be called the "Big Ganeti Lock" because holding that one will avoid
any other concurrent Ganeti operations.

We might also want to devise more metalocks (e.g. all nodes, all
nodes+config) in order to make it easier for some parts of the code to
acquire what it needs
@@ -383,16 +494,6 @@ In the future things like the node locks could become metalocks, should we
decide to split them into an even more fine grained approach, but this will
probably be only after the first 2.0 version has been released.

Adding/Removing locks
+++++++++++++++++++++
@@ -405,11 +506,11 @@ metalocks that guarantee to grab sets of resources without specifying them
explicitly. The implementation of this will be handled in the locking library
itself.

When instances or nodes disappear from the cluster the relevant locks
must be removed. This is easier than adding new elements, as the code
which removes them must own them exclusively already, and thus deals
with metalocks exactly as normal code acquiring those locks. Any
operation queuing on a removed lock will fail after its removal.

Asynchronous operations
+++++++++++++++++++++++
@@ -421,8 +522,8 @@ the request is impossible or somehow erroneous.
In the future we may want to implement different types of asynchronous
operations such as:

- try to acquire this lock set and fail if not possible
- try to acquire one of these lock sets and return the first one you
  were able to get (or after a timeout) (select/poll like)

These operations can be used to prioritize operations based on available locks,
@@ -441,8 +542,8 @@ level. In the future we may want to split logical units in independent
"tasklets" with their own locking requirements. A different design doc (or mini
design doc) will cover the move from Logical Units to tasklets.

Code examples
+++++++++++++

In general when acquiring locks we should use a code path equivalent to::

  lock.acquire()
  try:
    ...
    # other code
  finally:
    lock.release()

This makes sure we release all locks, and avoid possible deadlocks. Of
course extra care must be used not to leave, where possible, locked
structures in an unusable state. Note that with Python 2.5 a simpler
syntax will be possible, but we want to keep compatibility with Python
2.4 so the new constructs should not be used.
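
The simpler syntax referred to is Python 2.5's ``with`` statement,
which would collapse the pattern above to the following (shown only
for illustration, since the 2.4 compatibility requirement rules it
out)::

  with lock:
    pass  # other code; the lock is released automatically on exit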

In order to avoid this extra indentation and code changes everywhere in the
Logical Units code, we decided to allow LUs to declare locks, and then execute
@@ -500,7 +603,7 @@ Granular locking is not enough to speed up operations, we also need a