Ganeti 2.1 design
This document describes the major changes in Ganeti 2.1 compared to the 2.0 version.
The 2.1 version will be a relatively small release. Its main aim is to avoid changing too much of the core code, while addressing issues and adding new features and improvements over 2.0, in a timely fashion.
Contents
- Objective
- Detailed design
Objective
Ganeti 2.1 will add features to help further automate cluster operations, further improve scalability to even bigger clusters, and make it easier to debug the Ganeti core.
Detailed design
As for 2.0 we divide the 2.1 design into three areas:
- core changes, which affect the master daemon/job queue/locking or all/most logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)
Core changes
Storage units modelling
Currently, Ganeti has a good model of the block devices for instances (e.g. LVM logical volumes, files, DRBD devices, etc.) but none of the storage pools that are providing the space for these front-end devices. For example, there are hardcoded inter-node RPC calls for volume group listing, file storage creation/deletion, etc.
The storage units framework will implement a generic handling for all kinds of storage backends:
- LVM physical volumes
- LVM volume groups
- File-based storage directories
- any other future storage method
There will be a generic list of methods that each storage unit type will provide, like:
- list of storage units of this type
- check status of the storage unit
Additionally, there will be specific methods for each storage unit type, for example:
- enable/disable allocations on a specific PV
- file storage directory creation/deletion
- VG consistency fixing
This will allow a much better modeling and unification of the various RPC calls related to backend storage pools in the future. Ganeti 2.1 is intended to add the basics of the framework, and not necessarily move all the current VG/file-based operations to it.
Note that while we model both LVM PVs and LVM VGs, the framework will not model any relationship between the different types. In other words, we model neither inheritance nor stacking, since this is too complex for our needs. While a vgreduce operation on an LVM VG could actually remove a PV from it, this will not be handled at the framework level, but at the individual operation level. The goal is for this to be a lightweight framework for abstracting the different storage operations, not for modelling the storage hierarchy.
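To make the intended shape of the framework concrete, here is a minimal Python sketch of the kind of interface it could expose. The class and method names are illustrative assumptions only, not the actual Ganeti API.

class _StorageBase(object):
  """Base class for one storage unit type (LVM PV, LVM VG, file directory)."""

  def List(self):
    """Return the storage units of this type present on the node."""
    raise NotImplementedError()

  def GetStatus(self, name):
    """Check the status of the given storage unit."""
    raise NotImplementedError()

class LvmPvStorage(_StorageBase):
  """LVM physical volumes, adding type-specific methods."""

  def SetAllocatable(self, name, allocatable):
    """Enable or disable allocations on a specific PV."""
    raise NotImplementedError()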
Locking improvements
Current State and shortcomings
The class LockSet (see lib/locking.py) is a container for one or many SharedLock instances. It provides an interface to add/remove locks and to acquire and subsequently release any number of those locks contained in it.
Locks in a LockSet are always acquired in alphabetic order. Due to the way we're using locks for nodes and instances (the single cluster lock isn't affected by this issue) this can lead to long delays when acquiring locks if another operation tries to acquire multiple locks but has to wait for yet another operation.
In the following demonstration we assume we have the instance locks inst1, inst2, inst3 and inst4.
- Operation A grabs the lock for instance inst4.
- Operation B wants to acquire all instance locks in alphabetic order, but it has to wait for inst4.
- Operation C tries to lock inst1, but it has to wait until Operation B (which is trying to acquire all locks) releases the lock again.
- Operation A finishes and releases the lock on inst4. Operation B can continue and eventually releases all locks.
- Operation C can get the inst1 lock and finishes.
Technically there's no need for Operation C to wait for Operation A, and subsequently Operation B, to finish. Operation B can't continue until Operation A is done (it has to wait for inst4) anyway.
Proposed changes
Non-blocking lock acquiring
Acquiring locks for OpCode execution is always done in blocking mode. The acquire calls won't return until the lock has successfully been acquired (or an error occurred, although we won't cover that case here).
SharedLock and LockSet must be able to be acquired in a non-blocking way. They must support a timeout and abort trying to acquire the lock(s) after the specified amount of time.
Retry acquiring locks
To prevent other operations from waiting for a long time, as described in the demonstration above, LockSet must not keep locks for a prolonged period of time when trying to acquire two or more locks. Instead it should, with an increasing timeout for acquiring all locks, release all locks again and sleep some time if it fails to acquire all requested locks.
A good timeout value needs to be determined. In any case, LockSet should proceed to acquire locks in blocking mode after a few (unsuccessful) attempts to acquire all requested locks. One proposal for the timeout is to use 2**tries seconds, where tries is the number of unsuccessful tries.
In the demonstration above this would allow Operation C to continue after Operation B unsuccessfully tried to acquire all locks and released all acquired locks (inst1, inst2 and inst3) again.
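As a rough illustration of the proposed behaviour, the following sketch assumes a hypothetical LockSet.acquire() that accepts a timeout parameter and returns None when it could not get all requested locks in time; it is not the actual interface, only a model of the retry logic described above.

import time

def acquire_with_backoff(lockset, names, max_tries=5):
  for tries in range(max_tries):
    # The timeout grows as 2**tries seconds with each unsuccessful try.
    acquired = lockset.acquire(names, timeout=2 ** tries)
    if acquired is not None:
      return acquired
    # All partially acquired locks have been released again, so other
    # operations can proceed while we sleep before retrying.
    time.sleep(0.1)
  # After a few unsuccessful attempts, fall back to a blocking acquire.
  return lockset.acquire(names)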
Other solutions discussed
There was also some discussion on going one step further and extending the job queue (see lib/jqueue.py) to select the next task for a worker depending on whether it can acquire the necessary locks. While this may reduce the number of necessary worker threads and/or increase throughput on large clusters with many jobs, it also brings many potential problems with it, such as contention and increased memory usage. As this would be an extension of the changes proposed before it could be implemented at a later point in time, but we decided to stay with the simpler solution for now.
Implementation details
SharedLock redesign
The current design of SharedLock is not good for supporting timeouts when acquiring a lock and there are also minor fairness issues in it. We plan to address both with a redesign. A proof of concept implementation was written and resulted in significantly simpler code.
Currently SharedLock uses two separate queues for shared and exclusive acquires and waiters get to run in turns. This means if an exclusive acquire is released, the lock will allow shared waiters to run and vice versa. Although it's still fair in the end there is a slight bias towards shared waiters in the current implementation. The same implementation with two separate queues cannot support timeouts without adding a lot of complexity.
Our proposed redesign changes SharedLock to have only one single queue. There will be one condition (see Condition for a note about performance) in the queue per exclusive acquire and two for all shared acquires (see below for an explanation). The maximum queue length will always be 2 + (number of exclusive acquires waiting). The number of queue entries for shared acquires can vary from 0 to 2.
The two conditions for shared acquires are a bit special. They will be used in turn. When the lock is instantiated, no conditions are in the queue. As soon as the first shared acquire arrives (and there are holder(s) or waiting acquires; see Acquire), the active condition is added to the queue. Until it becomes the topmost condition in the queue and has been notified, any shared acquire is added to this active condition. When the active condition is notified, the conditions are swapped and further shared acquires are added to the previously inactive condition (which has now become the active condition). After all waiters on the previously active (now inactive) and now notified condition received the notification, it is removed from the queue of pending acquires.
This means shared acquires will skip any exclusive acquire in the queue. We believe it's better to improve parallelization on operations only asking for shared (or read-only) locks. Exclusive operations holding the same lock can not be parallelized.
Acquire
For exclusive acquires a new condition is created and appended to the queue. Shared acquires are added to the active condition for shared acquires and if the condition is not yet on the queue, it's appended.
The next step is to wait for our condition to be on the top of the queue (to guarantee fairness). If the timeout expired, we return to the caller without acquiring the lock. On every notification we check whether the lock has been deleted, in which case an error is returned to the caller.
The lock can be acquired if we're on top of the queue (there is no one else ahead of us). For an exclusive acquire, there must not be other exclusive or shared holders. For a shared acquire, there must not be an exclusive holder. If these conditions are all true, the lock is acquired and we return to the caller. In any other case we wait again on the condition.
If it was the last waiter on a condition, the condition is removed from the queue.
Optimization: There's no need to touch the queue if there are no pending acquires and no current holders. The caller can have the lock immediately.
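A condensed sketch of this acquire flow follows, with hypothetical attribute and helper names, omitting the internal mutex, the timeout bookkeeping and the queue cleanup; it only models the logic described above.

def acquire(self, shared, timeout):
  # Optimization: no pending acquires and no current holders, grab the
  # lock immediately without touching the queue.
  if not self.__pending and not self.__is_held():
    self.__do_acquire(shared)
    return True

  # Exclusive acquires get their own condition; shared acquires join the
  # active shared condition, appending it to the queue if necessary.
  condition = self.__enqueue(shared)

  while True:
    # The condition's wait() is assumed to return False on timeout.
    if not condition.wait(timeout):
      return False  # timed out without acquiring the lock
    if self.__deleted:
      # The real implementation would raise a proper Ganeti error here.
      raise Exception("Lock was deleted while waiting")
    # Only the topmost condition in the queue may acquire the lock, and
    # only if there are no conflicting holders.
    if self.__pending[0] is condition and self.__can_acquire(shared):
      self.__do_acquire(shared)
      return True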

Release
First the lock removes the caller from the internal owner list. If there are pending acquires in the queue, the first (the oldest) condition is notified.
If the first condition was the active condition for shared acquires, the inactive condition will be made active. This ensures fairness with exclusive locks by forcing consecutive shared acquires to wait in the queue.

Delete
The caller must either hold the lock in exclusive mode already or the lock must be acquired in exclusive mode. Trying to delete a lock while it's held in shared mode must fail.
After ensuring the lock is held in exclusive mode, the lock will mark itself as deleted and continue to notify all pending acquires. They will wake up, notice the deleted lock and return an error to the caller.
Condition
Note: This is not necessary for the locking changes above, but it may be a good optimization (pending performance tests).
The existing locking code in Ganeti 2.0 uses Python's built-in threading.Condition class. Unfortunately Condition implements timeouts by sleeping 1ms to 20ms between tries to acquire the condition lock in non-blocking mode. This requires unnecessary context switches and contention on the CPython GIL (Global Interpreter Lock).
By using POSIX pipes (see pipe(2)) we can use the operating system's support for timeouts on file descriptors (see select(2)). A custom condition class will have to be written for this.
On instantiation the class creates a pipe. After each notification the previous pipe is abandoned and re-created (technically the old pipe needs to stay around until all notifications have been delivered).
All waiting clients of the condition use select(2) or poll(2) to wait for notifications, optionally with a timeout. A notification will be signalled to the waiting clients by closing the pipe. If the pipe wasn't closed during the timeout, the waiting function returns to its caller nonetheless.
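The core mechanism can be sketched as follows. This is a simplified illustration only: the real class also has to re-create the pipe after each notification and integrate with the lock it belongs to.

import os
import select

class PipeNotifier(object):
  def __init__(self):
    # Waiters select() on the read end; notification closes the write end.
    self._read_fd, self._write_fd = os.pipe()

  def wait(self, timeout):
    # select() returns early if the pipe was closed (i.e. we were notified),
    # otherwise it returns once the timeout expires.
    select.select([self._read_fd], [], [], timeout)

  def notifyAll(self):
    # Closing the write end makes the read end readable (EOF) and thus
    # wakes up all waiters at once.
    os.close(self._write_fd)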
Node daemon availability
Current State and shortcomings
Currently, when a Ganeti node suffers serious system disk damage, the migration/failover of an instance may not correctly shut down the virtual machine on the broken node, causing instance duplication. The gnt-node powercycle command can be used to force a node reboot and thus to avoid duplicated instances. This command relies on node daemon availability, though, and thus can fail if the node daemon has some pages swapped out of RAM, for example.
Proposed changes
The proposed solution forces the node daemon to run exclusively in RAM. It uses python ctypes to call mlockall(MCL_CURRENT | MCL_FUTURE) on the node daemon process and all its children. In addition another log handler has been implemented for the node daemon to redirect to /dev/console messages that cannot be written to the logfile.
With these changes node daemon can successfully run basic tasks such as a powercycle request even when the system disk is heavily damaged and reading/writing to disk fails constantly.
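For reference, the mlockall() call itself can be issued from Python roughly like this; the constant values are the common Linux definitions and are hardcoded here only for illustration.

import ctypes

MCL_CURRENT = 1   # lock all pages currently mapped
MCL_FUTURE = 2    # and all pages mapped in the future

libc = ctypes.cdll.LoadLibrary("libc.so.6")
if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
  raise OSError("mlockall() failed")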
New Features
Automated Ganeti Cluster Merger
Current situation
Currently there's no easy way to merge two or more clusters together, but in order to optimize resources this is a needed missing piece. The goal of this design doc is to come up with an easy-to-use solution which allows you to merge two or more clusters together.
Initial contact
As the design of Ganeti is based on an autonomous system, Ganeti by itself has no way to reach nodes outside of its cluster. To overcome this situation we're required to prepare the cluster before we can go ahead with the actual merge: we have to replace at least the ssh keys on the affected nodes before we can do any operation with gnt- commands.
To make this an automated process we'll ask the user to provide us with the root password of every cluster we have to merge. We use the password to grab the current id_dsa key and then rely on that ssh key for any further communication to be made until the cluster is fully merged.
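As an illustration only (not the actual implementation), grabbing the remote key could be done along these lines, assuming the sshpass utility is available on the node performing the merge:

import subprocess

def FetchRemoteSshKey(master, root_password):
  # Log in with the provided root password and read the private ssh key,
  # which is then used for all further communication with that cluster.
  proc = subprocess.Popen(
    ["sshpass", "-p", root_password,
     "ssh", "-o", "StrictHostKeyChecking=no",
     "root@%s" % master, "cat /root/.ssh/id_dsa"],
    stdout=subprocess.PIPE)
  key, _ = proc.communicate()
  return key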
Cluster merge
After initial contact we do the cluster merge:
- Grab the list of nodes
- On all nodes add our own id_dsa.pub key to authorized_keys
- Stop all instances running on the merging cluster
- Disable ganeti-watcher as it tries to restart Ganeti daemons
- Stop all Ganeti daemons on all merging nodes
- Grab the config.data from the master of the merging cluster
- Stop local ganeti-masterd
- Merge the config:
  - Open our own cluster config.data
  - Open cluster config.data of the merging cluster
  - Grab all nodes of the merging cluster
  - Set master_candidate to false on all merging nodes
  - Add the nodes to our own cluster config.data
  - Grab all the instances on the merging cluster
  - Adjust the port if the instance has drbd layout:
    - In logical_id (index 2)
    - In physical_id (index 1 and 3)
  - Add the instances to our own cluster config.data
- Start ganeti-masterd with --no-voting --yes-do-it
- gnt-node add --readd on all merging nodes
- gnt-cluster redist-conf
- Restart ganeti-masterd normally
- Enable ganeti-watcher again
- Start all merging instances again
Rollback
Until we actually (re)add any nodes we can abort and rollback the merge at any point. After merging the config, though, we have to get the backup copy of config.data (from another master candidate node). And for security reasons it's a good idea to undo the id_dsa.pub distribution by going to every affected node and removing the id_dsa.pub key again. Also keep in mind that we have to start the Ganeti daemons and start up the instances again.
Verification
Last but not least we should verify that the merge was successful.
Therefore we run gnt-cluster verify, which ensures that the cluster overall is in a healthy state. Additionally, it's also possible to compare the list of instances/nodes with a list made prior to the upgrade to make sure we didn't lose any data/instance/node.
Appendix
cluster-merge.py
Used to merge the cluster config. This is a POC and might differ from actual production code.
#!/usr/bin/python

import sys

from ganeti import config
from ganeti import constants

# Open our own (offline) configuration and the config file of the merging
# cluster, given as the first command line argument.
c_mine = config.ConfigWriter(offline=True)
c_other = config.ConfigWriter(sys.argv[1])

# Add all nodes of the merging cluster as non-master-candidates.
fake_id = 0
for node in c_other.GetNodeList():
  node_info = c_other.GetNodeInfo(node)
  node_info.master_candidate = False
  c_mine.AddNode(node_info, str(fake_id))
  fake_id += 1

# Add all instances, re-allocating the DRBD ports in our own cluster.
for instance in c_other.GetInstanceList():
  instance_info = c_other.GetInstanceInfo(instance)
  for dsk in instance_info.disks:
    if dsk.dev_type in constants.LDS_DRBD:
      port = c_mine.AllocatePort()
      logical_id = list(dsk.logical_id)
      logical_id[2] = port
      dsk.logical_id = tuple(logical_id)
      physical_id = list(dsk.physical_id)
      physical_id[1] = physical_id[3] = port
      dsk.physical_id = tuple(physical_id)
  c_mine.AddInstance(instance_info, str(fake_id))
  fake_id += 1
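As shown, the script takes the path to the merging cluster's config.data as its single command line argument and adds that cluster's nodes and instances to the local configuration; being a proof of concept, it omits error handling and locking.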
Feature changes
Ganeti Confd
Current State and shortcomings
In Ganeti 2.0 all nodes are equal, but some are more equal than others. In particular they are divided between "master", "master candidates" and "normal". (Moreover they can be offline or drained, but this is not important for the current discussion). In general the whole configuration is only replicated to master candidates, and some partial information is spread to all nodes via ssconf.
This change was done so that the most frequent Ganeti operations didn't need to contact all nodes, and so clusters could become bigger. If we want more information to be available on all nodes, we either need to add more ssconf values, which counteracts that change, or to talk with the master node, which is not designed to happen now and requires its availability.
Information such as the instance->primary_node mapping will be needed on all nodes, and we also want to make sure services external to the cluster can query this information as well. This information must be available at all times, so we can't query it through RAPI, which would be a single point of failure, as it's only available on the master.
Proposed changes
In order to allow fast and highly available read-only access to some configuration values, we'll create a new ganeti-confd daemon, which will run on master candidates. This daemon will talk via UDP, and authenticate messages using HMAC with a cluster-wide shared key. This key will be generated at cluster init time, and stored on the cluster alongside the ganeti SSL keys, and readable only by root.
An interested client can query a value by making a request to a subset of the cluster master candidates. It will then wait to get a few responses, and use the one with the highest configuration serial number. Since the configuration serial number is increased each time the ganeti config is updated, and the serial number is included in all answers, this can be used to make sure to use the most recent answer, in case some master candidates are stale or in the middle of a configuration update.
In order to prevent replay attacks queries will contain the current unix timestamp according to the client, and the server will verify that its own timestamp is within the same 5-minute range (this requires synchronized clocks, which is a good idea anyway). Queries will also contain a "salt" which they expect the answers to be sent with, and clients are supposed to accept only answers which contain a salt generated by them.
The configuration daemon will be able to answer simple queries such as:
- master candidates list
- master node
- offline nodes
- instance list
- instance primary nodes
Wire protocol
A confd query will look like this, on the wire:
plj0{
"msg": "{\"type\": 1,
\"rsalt\": \"9aa6ce92-8336-11de-af38-001d093e835f\",
\"protocol\": 1,
\"query\": \"node1.example.com\"}\n",
"salt": "1249637704",
"hmac": "4a4139b2c3c5921f7e439469a0a45ad200aead0f"
}
"plj0" is a fourcc that details the message content. It stands for plain json 0, and can be changed as we move on to different type of protocols (for example protocol buffers, or encrypted json). What follows is a json encoded string, with the following fields:
- 'msg' contains a JSON-encoded query, its fields are:
- 'protocol', integer, is the confd protocol version (initially just constants.CONFD_PROTOCOL_VERSION, with a value of 1)
- 'type', integer, is the query type. For example "node role by name" or "node primary ip by instance ip". Constants will be provided for the actual available query types.
- 'query', string, is the search key. For example an ip, or a node name.
- 'rsalt', string, is the required response salt. The client must use it to recognize which answer it's getting.
- 'salt' must be the current unix timestamp, according to the client. Servers can refuse messages which have a wrong timing, according to their configuration and clock.
- 'hmac' is an hmac signature of salt+msg, with the cluster hmac key
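To make the format concrete, a client could build such a packet roughly as follows. This is a sketch only, not the actual confd client code; the use of SHA-1 is inferred from the 40-character hmac in the example above.

import hashlib
import hmac
import json
import time
import uuid

def BuildConfdQuery(query_type, query, hmac_key):
  # The rsalt lets the client match answers to this query.
  rsalt = str(uuid.uuid4())
  msg = json.dumps({"type": query_type,
                    "rsalt": rsalt,
                    "protocol": 1,
                    "query": query}) + "\n"
  # The outer salt is the current unix timestamp, used for replay protection.
  salt = str(int(time.time()))
  signature = hmac.new(hmac_key, salt + msg, hashlib.sha1).hexdigest()
  packet = "plj0" + json.dumps({"msg": msg, "salt": salt, "hmac": signature})
  return packet, rsalt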
If an answer comes back (which is optional, since confd works over UDP) it will be in this format:
plj0{
"msg": "{\"status\": 0,
\"answer\": 0,
\"serial\": 42,
\"protocol\": 1}\n",
"salt": "9aa6ce92-8336-11de-af38-001d093e835f",
"hmac": "aaeccc0dff9328fdf7967cb600b6a80a6a9332af"
}
Where:
- 'plj0' the message type magic fourcc, as discussed above
- 'msg' contains a JSON-encoded answer, its fields are:
- 'protocol', integer, is the confd protocol version (initially just constants.CONFD_PROTOCOL_VERSION, with a value of 1)
- 'status', integer, is the error code. Initially just 0 for 'ok' or 1 for 'error' (in which case answer contains an error detail, rather than an answer), but in the future it may be expanded to have more meanings (e.g. 2: the answer is compressed)
- 'answer', is the actual answer. Its type and meaning is query specific. For example for "node primary ip by instance ip" queries it will be a string containing an IP address, for "node role by name" queries it will be an integer which encodes the role (master, candidate, drained, offline) according to constants.
- 'salt' is the requested salt from the query. A client can use it to recognize what query the answer is answering.
- 'hmac' is an hmac signature of salt+msg, with the cluster hmac key (a client-side validation sketch follows this list)
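On the receiving side, a client would check the signature and the salt before trusting an answer, and among several valid answers keep the one with the highest serial. Sketched here with the same assumptions as the query-building example above:

import hashlib
import hmac
import json

def ParseConfdAnswer(packet, expected_rsalt, hmac_key):
  if not packet.startswith("plj0"):
    return None
  envelope = json.loads(packet[4:])
  # Verify the signature over salt+msg before looking at the contents.
  expected = hmac.new(hmac_key, envelope["salt"] + envelope["msg"],
                      hashlib.sha1).hexdigest()
  if envelope["hmac"] != expected:
    return None
  # The answer's salt must be the rsalt we generated for the query.
  if envelope["salt"] != expected_rsalt:
    return None
  return json.loads(envelope["msg"])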
Redistribute Config
Current State and shortcomings
Currently LURedistributeConfig triggers a copy of the updated configuration file to all master candidates and of the ssconf files to all nodes. There are other files which are maintained manually but which are important to keep in sync. These are:
- rapi SSL key certificate file (rapi.pem) (on master candidates)
- rapi user/password file rapi_users (on master candidates)
Furthermore there are some files which are hypervisor specific but we may want to keep in sync:
- the xen-hvm hypervisor uses one shared file for all vnc passwords, and copies the file once, during node add. This design is subject to revision to be able to have different passwords for different groups of instances via the use of hypervisor parameters, and to allow xen-hvm and kvm to use an equal system to provide password-protected vnc sessions. In general, though, it would be useful if the vnc password files were copied as well, to avoid unwanted vnc password changes on instance failover/migrate.
Optionally the admin may want to also ship files such as the global xend.conf file, and the network scripts to all nodes.
Proposed changes
RedistributeConfig will be changed to copy also the rapi files, and to call every enabled hypervisor asking for a list of additional files to copy. Users will have the possibility to populate a file containing a list of files to be distributed; this file will be propagated as well. Such a solution is really simple to implement and easily usable by scripts.
This code will be also shared (via tasklets or by other means, if tasklets are not ready for 2.1) with the AddNode and SetNodeParams LUs (so that the relevant files will be automatically shipped to new master candidates as they are set).
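A rough sketch of how the list of files to distribute could be assembled follows; the upload-list path and the hypervisor hook name are assumptions made for illustration, not the actual interface.

import os

USER_FILE_LIST = "/etc/ganeti/upload-files"  # hypothetical path

def GetDistributedFiles(enabled_hypervisors):
  # Start with the RAPI files that are currently kept in sync by hand.
  files = ["rapi.pem", "rapi_users"]
  # Ask each enabled hypervisor for additional files it wants distributed
  # (e.g. the xen-hvm VNC password file).
  for hv in enabled_hypervisors:
    files.extend(hv.GetAdditionalFiles())  # hypothetical hook
  # Finally add whatever the administrator listed explicitly, plus the
  # list file itself so it stays in sync too.
  if os.path.exists(USER_FILE_LIST):
    files.extend(line.strip() for line in open(USER_FILE_LIST)
                 if line.strip())
    files.append(USER_FILE_LIST)
  return files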
VNC Console Password
Current State and shortcomings
Currently just the xen-hvm hypervisor supports setting a password to connect to the instances' VNC console, and has one common password stored in a file.
This doesn't allow different passwords for different instances/groups of instances, and makes it necessary to remember to copy the file around the cluster when the password changes.