- Oct 20, 2008
-
-
Iustin Pop authored
Currently the gnt-* scripts are using a mix of print, logger.ToStd* and sys.stderr.write. We convert them all to using cli.ToStdout/err. This way, we can easily change the implementation for all at once. Reviewed-by: imsnah
-
Iustin Pop authored
The gnt-instance modify command didn't work correctly with respect to the beparams. There was also a typo in the corresponding LU. Reviewed-by: imsnah
-
Alexander Schreiber authored
We no longer use a single, cluster-wide hypervisor; the hypervisor to be used is now configured at the instance level. Reviewed-by: imsnah
-
- Oct 18, 2008
-
-
Alexander Schreiber authored
Reviewed-by: iustinp
-
- Oct 16, 2008
-
-
Iustin Pop authored
This patch enables cluster modify to change:
  - the enabled hypervisor list
  - hvparams (per hypervisor)
  - beparams (only the default group)
Syntax:
  gnt-cluster modify -B vcpus=3 -H xen-pvm:no_initrd_path
Validation for parameters is somewhat missing - the individual hypervisors will be checked for syntax and validity, but beparams has no validation yet (anywhere); it should be added here once we have a global method (will come soon). Reviewed-by: imsnah
-
Iustin Pop authored
This is just a raw update without any special formatting. Reviewed-by: imsnah
-
Iustin Pop authored
This adds the set/reset in the jqueue and luxi modules, a way to query it in OpQueryConfigValues, and also the command line interface for it:
  $ gnt-cluster queue info
  The drain flag is unset
  $ gnt-cluster queue drain
  $ gnt-cluster queue info
  The drain flag is set
  $ gnt-cluster queue undrain
  $ gnt-cluster queue info
  The drain flag is unset
The reason for making the setting go via luxi and not via an opcode is that opcodes can't be executed while the queue is drained; we don't query via luxi, though, since in the future this might become a cluster property as opposed to a node one. Reviewed-by: imsnah
-
- Oct 15, 2008
-
-
René Nussbaumer authored
This change is part of the integration of tools/batcher from Ganeti 1.2 into the Ganeti 2.0 core code. It has a submission interface compatible with the 1.2 version, with these differences:
  * it's integrated into Ganeti directly: gnt-instance batch-create
  * the iallocator is now provided by the instance specs
  * the log is now kept within Ganeti
  * no force needed
  * no sleep needed
  * memory and vcpus now go into beparams
  * missing stuff from the TODO (see below)
Open TODOs:
  * implement notification about creation status/interactive part (top-like)
  * backup instance allocation specs?
  * document usage and spec format
Reviewed-by: iustinp
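As a usage sketch (the spec file name is only an illustration; the spec format itself is still to be documented, per the TODO above):
  $ gnt-instance batch-create instances.json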
-
- Oct 14, 2008
-
-
Guido Trotter authored
- add backend and hypervisor parameters
- fix beparams validation/passing
- pass hypervisor and hvparams
- remove deprecated flags
Reviewed-by: iustinp
-
Guido Trotter authored
ValidateBeParams does not return a value, but its return value was being assigned to a variable which was never used. Avoid this assignment. Reviewed-by: iustinp
-
Guido Trotter authored
kernel, initrd, hvm_boot_order and vnc_bind_address are now hypervisor parameters and should not have their own flag. Moreover querying of vnc_bind_address should of course pass through the hv/ namespace. Reviewed-by: iustinp
-
Iustin Pop authored
The patch adds a new ‘--no-wait-for-sync’ parameter to grow-disk, similar to the one in instance add, and changes the default to wait. This is cleaner, as when the command returns we either have a fully synced disk or an error. This is a forward-port of rev 1183 on the 1.2 branch. Reviewed-by: ultrotter
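A possible invocation using the new flag (instance name, disk argument and size are illustrative only):
  $ gnt-instance grow-disk --no-wait-for-sync instance1.example.com sda 2g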
-
Iustin Pop authored
Change the constant name to match the value (autobalance -> auto_balance). Also add the auto_balance header so that gnt-instance can list it. Reviewed-by: ultrotter
-
Iustin Pop authored
This big patch changes the master code to use the beparams. Errors might have crept in, but it passes a small burnin. Reviewed-by: ultrotter
-
Iustin Pop authored
This patch adds a new '-s' parameter to ‘gnt-instance info’ that makes it return only 'static' information. This is much faster, especially for drbd instances. This is a forward-port of rev 1570 on the ganeti-1.2 branch, resending due to some conflicts. Reviewed-by: imsnah
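A possible invocation (instance name illustrative):
  $ gnt-instance info -s instance1.example.com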
-
Iustin Pop authored
Some information is not printed nicely (e.g. “virtual CDROM: False”), but this is the first step. Reviewed-by: imsnah
-
Iustin Pop authored
Reviewed-by: imsnah
-
Iustin Pop authored
This is just a change of the various hvm_ and pvm parameters to the hv model. Parameters are queried via hv/$name or via the whole dict as returned by hvparams. Reviewed-by: ultrotter,imsnah
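For illustration, a query could then look like this (the parameter name kernel_path is an assumption, not taken from the patch):
  $ gnt-instance list -o name,hv/kernel_path,hvparams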
-
Iustin Pop authored
This big patch changes instance create to the new hvparams structure. Old parameters are removed, so old jobs or old instance files will break current clusters. Reviewed-by: ultrotter
-
- Oct 08, 2008
-
-
Alexander Schreiber authored
Reviewed-by: ultrotter
-
Iustin Pop authored
This (big) patch moves the hypervisor type from the cluster to the instance level; the cluster attribute remains as the default hypervisor and will be renamed accordingly in a later patch. The cluster also gains the ‘enable_hypervisors’ attribute, and instances can be created with any of the enabled ones (no provision yet for changing that attribute).
The many, many changes in the rpc/backend layer are due to the fact that all the backend code read the hypervisor from the local copy of the config, and now we have to send it (either in the instance object or as a separate parameter) for each function.
The node list by default shows the node free/total memory for the default hypervisor; a new flag should be added to it to select another hypervisor. Instance list has a new field, hypervisor, that shows the instance's hypervisor. Cluster verify runs for all enabled hypervisor types.
The new FIXMEs are related to IAllocator, since now the node total/free/used memory counts are wrong (we can't reliably compute the free memory). Reviewed-by: imsnah
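For illustration, the new instance list field can then be selected as usual (invocation illustrative):
  $ gnt-instance list -o name,hypervisor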
-
- Oct 07, 2008
-
-
Alexander Schreiber authored
Merged r1777 from branches/ganeti/ganeti-1.2 Reviewed-by: imsnah
-
Iustin Pop authored
Background: when we have multiple jobs in the queue (more than just a few), many of the jobs (up to the number of threads) will be in state 'running', although many of them could actually be blocked, waiting for some locks. This is not good, as one cannot easily see what is happening.
The patch extends the possible opcode/job statuses with another one, waiting, which shows that the LU is in the lock-acquisition phase.
The mechanism for doing so is simple: we initialize (in the job queue) the opcode with OP_STATUS_WAITLOCK, and when the processor is ready to give control to the LU's Exec, it calls a notifier back into the _JobQueueWorker that sets the opcode status to OP_STATUS_RUNNING (with the proper queue locking). Because this mechanism does not save the job, all opcodes on disk will be in status WAITLOCK and not RUNNING anymore, so we also change the load sequence to consider WAITLOCK as RUNNING.
With the patch applied, creating five instances in parallel (via burnin) on a five node cluster shows that only two are executing while three are waiting for locks. Reviewed-by: imsnah
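A rough, self-contained sketch of the callback flow described above (names are simplified assumptions; the real code lives in jqueue.py/mcpu.py and additionally takes the queue lock and re-serializes the job):

  OP_STATUS_WAITLOCK = "waiting"
  OP_STATUS_RUNNING = "running"

  class Opcode(object):
      def __init__(self):
          # set by the job queue at submit/load time, before any locks are held
          self.status = OP_STATUS_WAITLOCK

  class Processor(object):
      """Stand-in for the opcode processor."""
      def __init__(self, notify_waitlock_done):
          self._notify = notify_waitlock_done
      def ExecOpCode(self, op, lu_exec):
          # ... acquire the locks the LU declared ...
          self._notify(op)      # locks held: flip WAITLOCK -> RUNNING in the queue
          return lu_exec()

  def _MarkRunning(op):
      # called back from the processor inside _JobQueueWorker,
      # under the job queue lock in the real implementation
      op.status = OP_STATUS_RUNNING

  op = Opcode()
  Processor(_MarkRunning).ExecOpCode(op, lambda: "done")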
-
- Oct 06, 2008
-
-
Iustin Pop authored
This patch adds a new luxi call that implements auto-archiving of jobs older than a certain age (or -1 for all completed jobs), and the gnt-job command that makes use of this (with 'all' for -1). Reviewed-by: imsnah
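For example (the exact subcommand name and age syntax here are assumptions, not taken from the patch):
  $ gnt-job autoarchive 1d     # archive finished jobs older than one day
  $ gnt-job autoarchive all    # archive all finished jobs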
-
Iustin Pop authored
Currently the SshRunner uses a SimpleConfigReader instance, but this is not ideal. We change it to use the cluster name directly (its constructor now takes this as a parameter, instead of an SCR), and its callers are changed to pass the name directly. As a consequence, we can now remove the initialization of SCR in gnt-cluster (copyfile and command) and instead query the master for the cluster name. Reviewed-by: imsnah
-
- Oct 01, 2008
-
-
Michael Hanselmann authored
Replace ssconf with configuration. Reviewed-by: iustinp
-
Michael Hanselmann authored
Get rid of ssconf and convert to configuration instead. Reviewed-by: iustinp
-
- Sep 30, 2008
-
-
Iustin Pop authored
I didn't realize that my zip will break when no args are passed... Reviewed-by: imsnah
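The underlying Python pitfall, as a generic illustration (not the actual code from the patch):

  rows = [(1, "a"), (2, "b")]
  ids, names = zip(*rows)    # fine: ids == (1, 2), names == ("a", "b")

  rows = []
  ids, names = zip(*rows)    # ValueError: zip() is called with no arguments here,
                             # so there is nothing to unpack into ids/names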
-
Iustin Pop authored
This patch adds the possibility of selecting job/opcode timestamps in gnt-job list and info. The code handling the possible cases (None or a valid timestamp) is ugly though... Reviewed-by: imsnah
-
Iustin Pop authored
Currently we format the timestamp inside the gnt-job info function. We will need this in more places in the future, so move it to cli.py as a separate, exported function. Reviewed-by: imsnah
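A minimal sketch of such a helper (the (seconds, microseconds) tuple format and the output format are assumptions, not taken from the patch):

  import time

  def FormatTimestamp(ts):
      """Return a human-readable string for a (seconds, microseconds) tuple."""
      if not isinstance(ts, (tuple, list)) or len(ts) != 2:
          return "?"
      sec, usec = ts
      return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(sec)) + ".%06d" % usec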
-
- Sep 29, 2008
-
-
Iustin Pop authored
This patch adds the job execution log in “gnt-job info” and also allows its selection in “gnt-job list” (however here it's not very useful as it's not easy to parse). It does this by adding a new field in the query job call, named ‘oplog’. With this, one can get a very clear examination of the job. What remains to be added would be timestamps for start/stop of the processing for the job itself and its opcodes. Reviewed-by: imsnah
-
Iustin Pop authored
Currently, it is hard to examine a job in detail; the output of ‘gnt-job list’ is not easy to parse. The patch adds a ‘gnt-job info’ command that is (vaguely) similar to ‘gnt-instance info’ in that it shows in a somewhat easy to understand format the details of a job. The result formatter is the most complicated part, since the results are not standardized; the code attempts to format nicely the most common result types (as taken from a random job list), via a generic algorithm. Reviewed-by: imsnah
-
Iustin Pop authored
It is not currently possible to show a summary of the job in the output of “gnt-job list”. The closest is listing the whole opcode(s), but that is too verbose. Also, the default output (id, status) is not very useful, unless one looks for (and knows about) an exact job ID.
The patch adds a “summary” description of a job composed of the OP_IDs of the individual opcodes. Moreover, if an opcode has a ‘logical’ target in a certain opcode field (e.g. start instance has the instance name as the target), then it is included in the formatting as well. It's easier to explain via a sample output:
  gnt-job list
  ID Status  Summary
   1 error   NODE_QUERY
   2 success NODE_ADD(gnta2)
   3 success CLUSTER_QUERY
   4 success NODE_REMOVE(gnta2.example.com)
   5 error   NODE_QUERY
   6 success NODE_ADD(gnta2)
   7 success NODE_QUERY
   8 success OS_DIAGNOSE
   9 success INSTANCE_CREATE(instance1.example.com)
  10 success INSTANCE_REMOVE(instance1.example.com)
  11 error   INSTANCE_CREATE(instance1.example.com)
  12 success INSTANCE_CREATE(instance1.example.com)
  13 success INSTANCE_SHUTDOWN(instance1.example.com)
  14 success INSTANCE_ACTIVATE_DISKS(instance1.example.com)
  15 error   INSTANCE_CREATE(instance2.example.com)
  16 error   INSTANCE_CREATE(instance2.example.com)
  17 success INSTANCE_CREATE(instance2.example.com)
  18 success INSTANCE_ACTIVATE_DISKS(instance1.example.com)
  19 success INSTANCE_ACTIVATE_DISKS(instance2.example.com)
  20 success INSTANCE_SHUTDOWN(instance1.example.com)
  21 success INSTANCE_SHUTDOWN(instance2.example.com)
This is done by a simple change to the opcode classes, which allows an opcode to format itself. The additional function is small enough that it can go in opcodes.py, where it could also be used by a client if needed. Reviewed-by: imsnah
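The self-formatting could look roughly like this (the attribute holding the ‘logical’ target is given an illustrative name here, not the real one):

  class OpCode(object):
      OP_ID = "OP_ABSTRACT"
      OP_TARGET_FIELD = None        # e.g. "instance_name"; illustrative attribute

      def Summary(self):
          # "OP_INSTANCE_SHUTDOWN" -> "INSTANCE_SHUTDOWN", plus the logical target if any
          summary = self.OP_ID[len("OP_"):]
          if self.OP_TARGET_FIELD is not None:
              summary += "(%s)" % getattr(self, self.OP_TARGET_FIELD)
          return summary

  class OpShutdownInstance(OpCode):
      OP_ID = "OP_INSTANCE_SHUTDOWN"
      OP_TARGET_FIELD = "instance_name"

  op = OpShutdownInstance()
  op.instance_name = "instance1.example.com"
  print(op.Summary())               # INSTANCE_SHUTDOWN(instance1.example.com)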
-
- Sep 27, 2008
-
-
Iustin Pop authored
This patch adds listing of the serial_no attribute in gnt-instance and gnt-node list, and updates to the manpages to reflect the change. Reviewed-by: ultrotter
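For example (field selection illustrative):
  $ gnt-instance list -o name,serial_no
  $ gnt-node list -o name,serial_no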
-
- Sep 11, 2008
-
-
Guido Trotter authored
It used to refer to "nodes", which was confusing. Reviewed-by: iustinp
-
Iustin Pop authored
Finish the --submit changes with these two, which (because they are multi-opcode commands) require special handling. Reviewed-by: ultrotter
-
- Sep 10, 2008
-
-
Iustin Pop authored
In the gnt-instance script, _ExpandNames() uses jobs to query instance names. This is not optimal, so we change it to use queries. Reviewed-by: ultrotter
-
Iustin Pop authored
This patch adds support for the “--submit” parameter in the gnt-instance script, for the commands where it makes sense. Reviewed-by: ultrotter
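For example, assuming shutdown is among the commands that gained the option (instance name illustrative):
  $ gnt-instance shutdown --submit instance1.example.com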
-
- Sep 08, 2008
-
-
Guido Trotter authored
Fix the check in gnt-cluster, otherwise gnt-cluster verify-disks is broken. Since the version in 1.2 used to return a tuple, we'll accept both. Reviewed-by: iustinp
-
- Sep 02, 2008
-
-
Alexander Schreiber authored
Implement more options for gnt-backup import Reviewed-by: ultrotter
-