- May 21, 2013
- Helga Velroyen authored
This replaces the field 'vg_names' in the RPC call of 'node info' by 'storage_units'. A storage unit is a tuple <storage_type, key> and a generalization of a vg_name. The list of vg names is replaced by a list of storage units. The modified RPC call will be used to report storage space for more than just LVM volume groups. What the 'key' is depends on the storage type. For the storage type lvm-vg, the key is the volume group name. To keep backward compatibility, all functions that use the old vg_names convert them to a list where every volume group is mapped to a tuple [('lvm-vg', volume_group)] before making the call.
Signed-off-by: Helga Velroyen <helgav@google.com>
Reviewed-by: Bernardo Dal Seno <bdalseno@google.com>
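For illustration, the vg_names-to-storage_units conversion can be sketched in a few lines of Python; the helper name below is hypothetical and not the actual Ganeti code:

    # A storage unit is a (storage_type, key) tuple; for the "lvm-vg"
    # storage type the key is the volume group name.
    def vg_names_to_storage_units(vg_names):
        """Wrap each volume group name into an ('lvm-vg', <vg_name>) tuple."""
        return [("lvm-vg", vg_name) for vg_name in vg_names]

    # Legacy callers holding a plain list of volume group names convert it
    # before issuing the modified 'node info' RPC call:
    storage_units = vg_names_to_storage_units(["xenvg", "ganeti-vg"])
    # -> [('lvm-vg', 'xenvg'), ('lvm-vg', 'ganeti-vg')]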
- May 14, 2013
- Michele Tartara authored
The Haskell ConfD client was assuming internet addresses to be IPv4. This patch modifies the client so that it is able to automatically detect the protocol it should use by analyzing the address it is told to connect to.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
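The address-family detection described here lives in the Haskell ConfD client; purely as an illustration of the idea, the same decision can be expressed in Python with the standard library:

    import ipaddress
    import socket

    def family_of(address):
        """Pick the socket family for a literal IP address (v4 or v6)."""
        ip = ipaddress.ip_address(address)  # accepts both IPv4 and IPv6 literals
        return socket.AF_INET if ip.version == 4 else socket.AF_INET6

    family_of("192.0.2.1")    # -> socket.AF_INET
    family_of("2001:db8::1")  # -> socket.AF_INET6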
- Michele Tartara authored
This function can be useful to many parts of the code to convert the string representation of an IP (v4 or v6) address into the proper data type.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- May 13, 2013
- Michele Tartara authored
Enable the monitoring daemon to invoke the Xen instance status data collector.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
It will need to be accessed by the monitoring daemon.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
The global status is computed from the statuses of the individual instances. The output JSON format is adapted to include this piece of information, as prescribed by the design document.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
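A purely illustrative Python sketch of the aggregation (the field names are made up, not the actual report schema):

    def global_status(instance_reports):
        """The global status is ok only if every single instance is ok."""
        return all(report["status"] for report in instance_reports)

    report = {
        "instances": [{"name": "inst1", "status": True},
                      {"name": "inst2", "status": False}],
    }
    report["status"] = global_status(report["instances"])  # -> False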
- Michele Tartara authored
It will be used by multiple data collectors, not only the DRBD collector.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
Instead of manually specifying the names of the data collectors in mon-collector, just use the dcName field each of them exports.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
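The effect can be pictured with a small hypothetical Python sketch (the real mon-collector code is Haskell, and the class below is invented for illustration):

    class DRBDCollector:
        dc_name = "drbd"  # plays the role of the exported dcName field

        @staticmethod
        def report():
            return {"name": "drbd", "data": {}}

    # The dispatch table is derived from what each collector exports,
    # instead of repeating the names by hand.
    COLLECTORS = [DRBDCollector]
    BY_NAME = {collector.dc_name: collector for collector in COLLECTORS}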
- Michele Tartara authored
Instead of building the report as part of the "Main" function, have it built by its own dedicated function, so that it will be able to export it directly to the monitoring daemon when needed.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
Name, version, format version, category and kind of the Instance Status data collector are now exported.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
Fetch the reason trail from file, failing gracefully if it is not found, and include it in the output of the instance status data collector.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
Add a function for determining whether the status of an instance is ok, and represent this information in the corresponding field of the report.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
Compute the actual state of the instance and export it.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
Add the Xen instance status data collector with only its core features. The next commits will add more reporting functionality. Access to the collector is provided through the mon-collector tool.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- Michele Tartara authored
The Xen instance status data collector will need to get some information from the hypervisor. This commit introduces a module providing such functions.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
- Michele Tartara authored
getInstReasonFilename is built to resemble the corresponding Python function.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
- May 10, 2013
- Klaus Aehlig authored
Make hroller take into account the nodes (redundant) instances will be migrated to. This behavior can be overridden by the --offline-maintenance option, which will make hroller plan under the assumption that all instances will be shut down before starting with the rolling reboots.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
- Klaus Aehlig authored
For online rolling reboots, there are two kinds of restrictions. First, we cannot reboot the primary and secondary nodes of an instance together. Secondly, two nodes cannot be rebooted simultaneously if they are the primary nodes of two instances with the same secondary node. The second condition requires knowledge of all nodes, not only those the graph is to be constructed on.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
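The two restrictions can be read as edges of a conflict graph; a minimal Python sketch of that reading (simplified, hypothetical data representation, not the hroller code itself):

    from itertools import combinations

    def reboot_conflicts(instances):
        """Return node pairs that must not be rebooted in the same group.

        Each instance is given as a (primary, secondary) pair of node names.
        """
        edges = set()
        by_secondary = {}
        for pri, sec in instances:
            # Restriction 1: an instance's primary and secondary node
            # cannot go down at the same time.
            edges.add(frozenset((pri, sec)))
            by_secondary.setdefault(sec, set()).add(pri)
        # Restriction 2: two primaries sharing the same secondary node
        # cannot be rebooted simultaneously either.
        for primaries in by_secondary.values():
            for a, b in combinations(sorted(primaries), 2):
                edges.add(frozenset((a, b)))
        return edges

    reboot_conflicts([("node1", "node3"), ("node2", "node3")])
    # -> edges node1-node3, node2-node3 and node1-node2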
- Klaus Aehlig authored
Add a new option to hroller to only output information about the first reboot group. Together with the option --node-tags this allows for the following workflow. First tag all nodes; then repeatedly compute the first node group, handle these nodes and remove the tags. In between these steps, other operations can be carried out on the cluster.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
- Klaus Aehlig authored
Make hroller output the node groups not containing the master node sorted by size, largest group first. The master node still remains the last node of the last reboot group. In this way, most progress is made when switching back to normal cluster operations after the first reboot group.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
- May 07, 2013
- Klaus Aehlig authored
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
- Klaus Aehlig authored
Add option --node-tags to tell hroller to consider only nodes with these tags. A use case would be a tag tracking the nodes on which maintenance has not yet been carried out, e.g., if rolling reboots are interleaved with other cluster operations.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
- Klaus Aehlig authored
Since the htools representation of a node now allows adding the node tags, populate this field correctly in the RAPI backend.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
- Klaus Aehlig authored
Since the htools representation of a node now allows adding the node tags, populate this field correctly in the LUXI backend.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
- Klaus Aehlig authored
In order to allow htools to make use of node tags, add them to the text format. This is done by adding a new column at the end of the node lines. If this column is missing, the default value (which is the empty list) is left unchanged, thus yielding the current behavior.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
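Roughly, the backward-compatible parsing works as sketched below in Python (the column layout is invented for illustration; the real parser is the htools Text backend in Haskell):

    # Illustrative only: assume five fixed '|'-separated columns, optionally
    # followed by a comma-separated tag list as a trailing column.
    N_FIXED_COLUMNS = 5

    def parse_node_line(line):
        fields = line.split("|")
        name = fields[0]
        if len(fields) > N_FIXED_COLUMNS:
            tags = [t for t in fields[N_FIXED_COLUMNS].split(",") if t]
        else:
            # Missing column: keep the default (empty tag list), i.e. the
            # previous behavior.
            tags = []
        return name, tags

    parse_node_line("node1|4096|1024|2048|8")            # -> ('node1', [])
    parse_node_line("node1|4096|1024|2048|8|tagA,tagB")  # -> ('node1', ['tagA', 'tagB'])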
- Klaus Aehlig authored
Since hroller (and probably other tools in the future) will support node selection based on node tags, extend the node data structure to allow adding this information.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
- Klaus Aehlig authored
Hroller used to first compute a coloring of the node graph and then filter out the nodes that it had to work on. While the only filtering was according to node groups, this did not make a difference, as there shouldn't be any instance with primary and secondary node in different node groups. With more elaborate filtering, however, reducing the graph first can lead to better reboot groups.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
- Klaus Aehlig authored
Change the behavior of mkNodeGraph to tacitly ignore all instances where one of the nodes is not in the list of nodes. In this way, we can construct sub-graphs by filtering the nodes and ignoring any possibly added isolated nodes for the missing indexes.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
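A rough Python rendering of that filtering rule (node names stand in for the node indexes used by the real Haskell mkNodeGraph):

    def mk_node_graph(nodes, instances):
        """Build edges between instance nodes, skipping instances whose
        primary or secondary node is outside the requested node set."""
        node_set = set(nodes)
        edges = set()
        for pri, sec in instances:
            if pri in node_set and sec in node_set:
                edges.add(frozenset((pri, sec)))
            # otherwise the instance touches a filtered-out node and is
            # tacitly ignored, so the sub-graph gains no isolated vertices
        return edges

    mk_node_graph(["node1", "node2"], [("node1", "node2"), ("node1", "node9")])
    # -> {frozenset({'node1', 'node2'})}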
- Apr 30, 2013
- Michele Tartara authored
The Haskell type definition of opcodes should remain aligned with the Python one.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Michele Tartara authored
It will be added to the Haskell definition of opcodes, to keep it aligned with the Python one, and it will be used for fetching the reason trail by the instance status data collector.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Michele Tartara authored
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Michele Tartara authored
Produce a personalized 404 error when the requested resource is not available.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Michele Tartara authored
Implement the API function of the monitoring daemon that provides the report of all the data collectors.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Michele Tartara authored
Allow asking the monitoring daemon for the report of one specific data collector.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Michele Tartara authored
Export the full report instead of just the data from the DRBD data collector.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Michele Tartara authored
Change the JSON serialization for the "category" field of data collectors, in accordance with the latest version of the design document.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Michele Tartara authored
Implement the handler for outputting the list of collectors (name, category, kind) in JSON format.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Iustin Pop authored
Since we use the primitive string type for group UUIDs, the group fields have a bug where we pass the group name as filter for node tests, whereas the nodes themselves use the group UUID. This results in a zero node count, an empty node list, and no instances being reported as assigned to groups. The patch fixes this and adds a test for the node count. It also improves test generation a bit and cleans up whitespace issues in Test/G/Q/Query.hs (the functions case_queryNode_allfields, prop_queryGroup_noUnknown and case_queryGroup_allfields are unchanged but simply have their indentation fixed).
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
Cherry-pick of e7124835, fixes issue 436
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
Conflicts:
    test/hs/Test/Ganeti/Objects.hs
    test/hs/Test/Ganeti/Query/Query.hs
- Apr 29, 2013
- Bernardo Dal Seno authored
With tiered allocation, hspace uses all the max specs in turn as the initial instance spec.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
- Bernardo Dal Seno authored
Now instance policies can contain more than one min/max spec. This is the main element of the "Constrained instance sizes" section in the "Partitioned Ganeti" design doc. This is a big patch, but changing the type of a configuration item requires changing all the code that handles it.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
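Conceptually, the change turns the single min/max pair of an instance policy into a list of such pairs; a hypothetical Python sketch of checking a spec against it (the field names are illustrative, not the actual configuration schema):

    def spec_within(spec, min_spec, max_spec):
        """True if every resource of spec lies between min and max."""
        return all(min_spec[k] <= spec[k] <= max_spec[k] for k in min_spec)

    def spec_allowed(spec, minmax_list):
        """A spec is allowed if it satisfies at least one min/max pair."""
        return any(spec_within(spec, mm["min"], mm["max"]) for mm in minmax_list)

    policy_minmax = [
        {"min": {"memory-size": 512,   "disk-size": 1024},
         "max": {"memory-size": 4096,  "disk-size": 102400}},
        {"min": {"memory-size": 8192,  "disk-size": 102400},
         "max": {"memory-size": 65536, "disk-size": 1048576}},
    ]
    spec_allowed({"memory-size": 2048, "disk-size": 20480}, policy_minmax)  # -> True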