- 13 Jan, 2014 6 commits
-
-
Jose A. Lopes authored
Add a user shutdown parameter for KVM. Based on this parameter, decide what information to report for a KVM instance, for example distinguishing between 'ADMIN_down' and 'USER_down'.
Signed-off-by: Jose A. Lopes <jabolopes@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
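As a rough illustration of the decision described here, a minimal Haskell sketch with invented type and constructor names (this is not Ganeti's actual API): given the new parameter and the administrator-requested state, pick the status to report for an instance whose KVM process is not running.

    -- Hypothetical sketch only; names do not match Ganeti's code.
    data AdminState = AdminUp | AdminDown
      deriving (Show, Eq)

    data ReportedStatus = AdminDownStatus | UserDownStatus
      deriving (Show, Eq)

    -- Decide what to report for an instance that is currently not running.
    reportStatus :: Bool        -- ^ is the user shutdown parameter enabled?
                 -> AdminState  -- ^ state requested by the administrator
                 -> ReportedStatus
    reportStatus _            AdminDown = AdminDownStatus  -- stopped on purpose
    reportStatus userShutdown AdminUp
      | userShutdown = UserDownStatus    -- the guest shut itself down
      | otherwise    = AdminDownStatus   -- without the parameter we cannot tell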
-
Jose A. Lopes authored
Add the KVM daemon logic, which contains monitors for QMP sockets and directory/file watching.
Signed-off-by: Jose A. Lopes <jabolopes@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
-
Jose A. Lopes authored
Refactor module 'Ganeti.UDSServer' so the KVM daemon can reuse code declared in this module to handle Unix domain sockets.
Signed-off-by: Jose A. Lopes <jabolopes@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
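For readers unfamiliar with the module, a self-contained sketch of the kind of Unix domain socket handling involved, using the 'network' package (the socket path and echo behaviour are just examples, not what Ganeti.UDSServer does):

    import Control.Exception (SomeException, try)
    import Control.Monad (forever)
    import Network.Socket
    import Network.Socket.ByteString (recv, sendAll)
    import System.Directory (removeFile)

    main :: IO ()
    main = do
      let path = "/tmp/example-uds.sock"
      -- remove a stale socket file, ignoring errors if it does not exist
      _ <- try (removeFile path) :: IO (Either SomeException ())
      sock <- socket AF_UNIX Stream defaultProtocol
      bind sock (SockAddrUnix path)
      listen sock 5
      forever $ do
        (conn, _) <- accept sock   -- serve one client at a time: echo and close
        msg <- recv conn 4096
        sendAll conn msg
        close conn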
-
Jose A. Lopes authored
* add user and group to 'configure.ac', 'Makefile.am' and 'AutoConf.hs.in'
* extend the 'Daemon' datatype with 'GanetiKvmd' and implement all related functions, such as 'daemonName'
* export the KVM daemon name as a constant
Signed-off-by: Jose A. Lopes <jabolopes@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
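A condensed, hypothetical sketch of the datatype extension described in the second bullet (the real definitions, including the AutoConf-generated names, live in Ganeti's Haskell sources):

    data Daemon = GanetiNoded
                | GanetiConfd
                | GanetiLuxid
                | GanetiKvmd          -- the newly added constructor
      deriving (Show, Eq, Enum, Bounded)

    daemonName :: Daemon -> String
    daemonName GanetiNoded = "ganeti-noded"
    daemonName GanetiConfd = "ganeti-confd"
    daemonName GanetiLuxid = "ganeti-luxid"
    daemonName GanetiKvmd  = "ganeti-kvmd"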
-
Jose A. Lopes authored
Fix whitespace in several modules.
Signed-off-by: Jose A. Lopes <jabolopes@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
-
Jose A. Lopes authored
Signed-off-by: Jose A. Lopes <jabolopes@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
-
- 08 Jan, 2014 1 commit
-
-
Klaus Aehlig authored
If the query fields don't require live data, we use the shortcut and don't request live data. However, we cannot take this shortcut if the fields the filter depends on require live data.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Petr Pudlak <pudlak@google.com>
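The condition itself fits in a few lines; a sketch with illustrative names only (the predicate 'isLive' stands for whatever marks a field as needing live data in the real code):

    needsLiveData :: (field -> Bool)  -- ^ does this field require live data?
                  -> [field]          -- ^ fields requested by the query
                  -> [field]          -- ^ fields the filter depends on
                  -> Bool
    needsLiveData isLive requested filterDeps =
      any isLive requested || any isLive filterDeps

The shortcut may only be taken when this returns False.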
-
- 07 Jan, 2014 9 commits
-
-
Klaus Aehlig authored
Now that all jobs are monitored with inotify, increase the polling interval.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
In order to obtain a higher throughput of jobs, schedule new jobs as soon as a job is detected to have finished.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
Add a function that can serve as an inotify event handler, updating a job in the job queue whenever the corresponding job file changes. Also attach it to all jobs selected to be run.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
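A rough sketch of attaching such a handler with the hinotify package, assuming its classic FilePath-based addWatch; 'reloadJob' stands in for the real queue-update logic:

    import Control.Concurrent (threadDelay)
    import Control.Monad (forever)
    import System.INotify (EventVariety (..), addWatch, initINotify)

    -- Re-run 'reloadJob' whenever the job file is modified or replaced.
    watchJobFile :: FilePath -> IO () -> IO ()
    watchJobFile jobFile reloadJob = do
      inotify <- initINotify
      _ <- addWatch inotify [Modify, MoveSelf, DeleteSelf] jobFile
                    (\_event -> reloadJob)
      forever (threadDelay 1000000)   -- keep the sketch alive; the daemon has its own main loop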
-
Klaus Aehlig authored
When attaching inotify watches to jobs, we need to preserve this information through potential requeuing actions. It is also needed for cleaning up.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
When cleaning up finished jobs, remove the inotify watch attached to them, if any.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
This provides the infrastructure to monitor running jobs via inotify, and hence to update the queue promptly upon job changes.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
Make luxid also handle requests to drain the job queue.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Klaus Aehlig authored
As luxid is also responsible for handling requests to drain the job queue, we need the corresponding RPC in Haskell as well.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Klaus Aehlig authored
The drain flag is set if the queue is not open.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
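A tiny sketch of that relation (the marker file is made up for illustration; the real check lives in Ganeti's job queue code):

    import Data.Functor ((<$>))
    import System.Directory (doesFileExist)

    -- The reported drain flag is simply the negation of "the queue is open",
    -- here modelled by the existence of a hypothetical marker file.
    isQueueDrained :: FilePath -> IO Bool
    isQueueDrained queueOpenMarker = not <$> doesFileExist queueOpenMarker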
-
- 20 Dec, 2013 9 commits
-
-
Klaus Aehlig authored
When watching a file, reinstantiate the inotify watch if notified of an event that removes the watch. Such events are likely to happen, as our usual way to "modify" a file is to atomically replace it with another one.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
Also log, at debug level only, when a change of a watched file was observed but did not result in any change of the derived value.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
At debug level, not only log that an inotify watch triggered, but also log the actual event.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Helga Velroyen authored
This patch adds a step to 'gnt-cluster verify' that verifies the existence and validity of the nodes' client certificates. Since this is a crucial part of the security concept, the verification is very detailed, emits expressive error messages, and is well covered by unit tests.
Signed-off-by: Helga Velroyen <helgav@google.com>
Reviewed-by: Hrvoje Ribicic <riba@google.com>
-
Helga Velroyen authored
From this patch on, incoming RPC calls are checked against the map of valid master candidate certificates. If no map is present, the cluster is assumed to be in bootstrap/upgrade mode and the incoming call is compared against the server certificate. This is necessary because neither at cluster initialization nor during upgrades from pre-2.11 versions has a candidate map been established yet. After an upgrade, the cluster RPC communication continues to use the server certificate until the client certificates are created and the candidate map is populated using 'gnt-cluster renew-crypto --new-node-certificates'. Note that updating the master's certificate requires a trick: the new certificate is first created under a temporary name, then its digest is updated and distributed using the old certificate (otherwise distribution would fail, since the nodes don't know the new digest yet), and only then is the certificate moved to its proper location.
Signed-off-by: Helga Velroyen <helgav@google.com>
Reviewed-by: Hrvoje Ribicic <riba@google.com>
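A simplified sketch of the verification logic described above, with digests as plain strings (not the actual Ganeti code):

    import qualified Data.Map as Map

    type Digest = String

    -- Accept the peer if its certificate digest belongs to a known master
    -- candidate; with no candidate map yet (bootstrap, or upgrade from
    -- pre-2.11), fall back to comparing against the server certificate digest.
    acceptIncomingCall :: Maybe (Map.Map String Digest)  -- ^ candidate map, if any
                       -> Digest                         -- ^ server certificate digest
                       -> Digest                         -- ^ digest presented by the peer
                       -> Bool
    acceptIncomingCall (Just candidates) _ peer = peer `elem` Map.elems candidates
    acceptIncomingCall Nothing serverDigest peer = peer == serverDigest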
-
Helga Velroyen authored
So far the RPC call 'node_crypto_tokens' only retrieved the digest of an existing certificate. This call is now enhanced to also create a new certificate and return the respective digest. This will be used in various operations, among them cluster init and renew-crypto.
Signed-off-by: Helga Velroyen <helgav@google.com>
Reviewed-by: Hrvoje Ribicic <riba@google.com>
-
Helga Velroyen authored
This patch enables Ganeti to store the candidate certificate map in ssconf. A utility function to read it is provided as well.
Signed-off-by: Helga Velroyen <helgav@google.com>
Reviewed-by: Hrvoje Ribicic <riba@google.com>
-
Helga Velroyen authored
At the end of this patch series, incoming RPC calls are validated against a map of the master candidate nodes' SSL certificate digests. This patch adds the map itself to the cluster's configuration.
Signed-off-by: Helga Velroyen <helgav@google.com>
Reviewed-by: Hrvoje Ribicic <riba@google.com>
-
Helga Velroyen authored
In various cluster operations, the master node needs to retrieve the digest of a node's SSL certificate. For this purpose, we add an RPC call to retrieve the digest. The function is designed in a general way, so that other (public) cryptographic tokens of a node (for example an SSH key) can be retrieved through it in the future as well.
Signed-off-by: Helga Velroyen <helgav@google.com>
Reviewed-by: Hrvoje Ribicic <riba@google.com>
-
- 18 Dec, 2013 3 commits
-
-
Klaus Aehlig authored
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
hsqueeze is supposed to tag nodes before powering them down, so that it can later recognize which nodes may be activated again. When showing the commands to execute, also add the tagging commands.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Klaus Aehlig authored
If an instance has a secondary node, it cannot easily be moved to just any node (in the same node group), as one particular node is distinguished as its secondary. Since hsqueeze should only consider nodes where moving the instances away is cheap, nodes hosting such instances cannot be considered for being offlined.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
- 17 Dec, 2013 9 commits
-
-
Santi Raffa authored
The shared file and gluster disk templates should not report their disk space information the way the file template does, because they do not behave the same. If a cluster pulls from the same shared source of storage, it is neither correct nor useful to report per-node disk availability, as the information is not node-specific. This change introduces the Shared File storage type, for which no per-node measuring of disk resources is done.
Signed-off-by: Santi Raffa <rsanti@google.com>
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
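The distinction in a nutshell, with made-up names (Ganeti's real storage types and reporting code differ):

    data StorageKind = NodeLocalFile | NodeLocalLvm | SharedFile | Gluster
      deriving (Show, Eq)

    reportsPerNodeSpace :: StorageKind -> Bool
    reportsPerNodeSpace SharedFile = False  -- same backing store on every node
    reportsPerNodeSpace Gluster    = False
    reportsPerNodeSpace _          = True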
-
Santi Raffa authored
Add support for the QEMU gluster: protocol. Also change the access mode routines so they check the access parameter for all templates.
Signed-off-by: Santi Raffa <rsanti@google.com>
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
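For reference, QEMU addresses Gluster volumes with URIs of the form gluster://host[:port]/volume/path. A small sketch of building such a URI (the helper and its arguments are invented for illustration):

    -- 'path' is expected to start with '/', e.g. "/instance1.img".
    glusterUri :: String -> Maybe Int -> String -> String -> String
    glusterUri host mport volume path =
      "gluster://" ++ host
                   ++ maybe "" (\p -> ':' : show p) mport
                   ++ "/" ++ volume ++ path

For example, glusterUri "gluster.example.com" (Just 24007) "gv0" "/instance1.img" yields gluster://gluster.example.com:24007/gv0/instance1.img.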
-
Santi Raffa authored
Add parameters to the Gluster disk template so Gluster can manage the mount point autonomously.
Signed-off-by: Santi Raffa <rsanti@google.com>
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Santi Raffa authored
Gluster still does not mount anything autonomously, but this commit changes where Gluster expects its mountpoint to be.
Signed-off-by: Santi Raffa <rsanti@google.com>
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Santi Raffa authored
This commit adds the gluster storage directory to ssconf (without actually using its value just yet).
Signed-off-by: Santi Raffa <rsanti@google.com>
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Santi Raffa authored
Add Gluster to Ganeti by essentially cloning the shared file behaviour everywhere in the code base.
Signed-off-by: Santi Raffa <rsanti@google.com>
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Klaus Aehlig authored
Support the query for the fields available for instances.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Petr Pudlak <pudlak@google.com>
-
Klaus Aehlig authored
...to be consistent with the Python implementation.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Petr Pudlak <pudlak@google.com>
-
Klaus Aehlig authored
When asked for all fields, we promise to return the list of fields sorted according to niceSort. Keep this promise.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Petr Pudlak <pudlak@google.com>
-
- 16 Dec, 2013 1 commit
-
-
Klaus Aehlig authored
As the call to watchFile and the evaluation of the initial getFStatSafe take non-zero time, the file could have changed before inotify was set up properly. Solve this problem by additionally checking, immediately after the inotify setup, whether the watched value has changed.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Petr Pudlak <pudlak@google.com>
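A sketch of that recheck pattern, again assuming hinotify's classic FilePath API; 'getValue' plays the role of getFStatSafe here:

    import Control.Monad (when)
    import System.INotify (EventVariety (..), addWatch, initINotify)

    watchValue :: Eq a => FilePath -> IO a -> (a -> IO ()) -> IO ()
    watchValue path getValue onChange = do
      initial <- getValue
      inotify <- initINotify
      _ <- addWatch inotify [Modify, MoveSelf, DeleteSelf] path
                    (\_event -> getValue >>= onChange)
      -- The file may have changed between reading 'initial' and the watch
      -- becoming active, so compare once more right after the setup.
      current <- getValue
      when (current /= initial) (onChange current)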
-
- 13 Dec, 2013 2 commits
-
-
Petr Pudlak authored
Currently they are generated only as Strings.
Signed-off-by: Petr Pudlak <pudlak@google.com>
Reviewed-by: Jose A. Lopes <jabolopes@google.com>
-
Petr Pudlak authored
This greatly enhances code readability. Also fix monadic types "Q ExpQ" [which is "Q (Q Exp)"] to "Q Exp".
Signed-off-by: Petr Pudlak <pudlak@google.com>
Reviewed-by: Jose A. Lopes <jabolopes@google.com>
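For context on the type fix: Language.Haskell.TH defines ExpQ as a synonym for Q Exp, so a splice generator should be typed Q Exp (equivalently ExpQ) rather than the doubly wrapped Q ExpQ. A minimal, generic example (not taken from Ganeti):

    {-# LANGUAGE TemplateHaskell #-}
    module THSketch where

    import Language.Haskell.TH

    -- Could equally be written as mkPlusOne :: Name -> ExpQ.
    mkPlusOne :: Name -> Q Exp
    mkPlusOne n = [| $(varE n) + 1 |]

    -- Used at a call site (with TemplateHaskell enabled) as: $(mkPlusOne 'x)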
-