- Sep 01, 2008
Guido Trotter authored
It was already allowed in gnt-instance modify, but ignored. It will be used to force skipping parameter checks. This is a forward-port from branches/ganeti-1.2 Original-Reviewed-by: imsnah Reviewed-by: iustinp
- Aug 29, 2008
Alexander Schreiber authored
Add HVM device type flags 4/4 Reviewed-by: ultrotter
Alexander Schreiber authored
Add HVM device type flags 3/4 Reviewed-by: ultrotter
Alexander Schreiber authored
Add HVM device type flags 2/4 Reviewed-by: ultrotter
Michael Hanselmann authored
Reported by Iustin. It used to return this: >>> utils.SplitTime(1234.999999999999) (1234, 1000) while it should've returned this: >>> utils.SplitTime(1234.999999999999) (1235, 0) Reviewed-by: ultrotter
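A minimal sketch of the fixed rounding logic, assuming a (seconds, milliseconds) return value as in the examples above (an illustration, not the actual Ganeti implementation):

    def SplitTime(value):
        # Round once, on the total number of milliseconds, so a fractional
        # part that rounds up to a full second carries into the seconds field.
        total_ms = int(round(value * 1000))
        return (total_ms // 1000, total_ms % 1000)

    assert SplitTime(1234.999999999999) == (1235, 0)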
Alexander Schreiber authored
Add HVM device type flags 1/4 Reviewed-by: ultrotter
Alexander Schreiber authored
doc fix: Describe default values for HVM instance options & cleanup. Reviewed-by: iustinp
Alexander Schreiber authored
Clarify cluster IP requirement. Reviewed-by: iustinp
Iustin Pop authored
This patch alters the WaitForJobChanges luxi-RPC call to have a configurable timeout, so that the call behaves nicely with long jobs that have no update. We do this by adding a timeout parameter to the RPC call, and returning a special constant when the timeout is reached without an update. The luxi client will repeatedly call WaitForJobChanges until it gets a real change. The timeout is hardcoded as half the RWTO value. The patch also removes an unused variable (new_state) from the WaitForJobChanges method. Reviewed-by: imsnah,ultrotter
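A hedged sketch of the client-side loop described here; the way the call is issued and the name of the timeout constant are invented for illustration:

    JOB_NOTCHANGED = "nochange"  # hypothetical constant returned on timeout

    def wait_for_real_change(client, job_id, fields, prev_state, timeout):
        # Keep asking until the server reports an actual change; a timeout
        # without an update just means the long-running job is still quiet.
        while True:
            result = client.call("WaitForJobChanges",
                                 (job_id, fields, prev_state, timeout))
            if result != JOB_NOTCHANGED:
                return result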
Alexander Schreiber authored
Fix gnt-instance modify for HVM parameters. This patch makes gnt-instance modify work again for the advanced HVM parameters after it was broken by other changes. Reviewed-by: ultrotter
Guido Trotter authored
Reviewed-by: imsnah
- Aug 28, 2008
Michael Hanselmann authored
Reported by Iustin. Reviewed-by: iustinp
Guido Trotter authored
By design, if an empty list of locks is acquired from a set, no locks are acquired, and thus release() cannot be called on the set. On the other hand, if None is passed instead of a list, the whole set is acquired, and it must later be released. When acquiring a whole empty set, a release must happen too, because the set-lock is acquired. Since we used to overwrite the required locks (needed_locks) with the acquired ones, we couldn't distinguish the two cases: an empty list of locks required, versus all locks required but an empty list returned because the set is empty. Valid solutions include:
(1) forbidding the acquisition of empty lists of locks
(2) skipping the acquire/release on empty lists of locks
(3) keeping separate to-acquire and acquired lists
This patch implements the third approach, so LUs now find their acquired locks in the acquired_locks dict rather than in needed_locks; the LUs which used this feature before have been updated. This is easier because it doesn't force LUs to handle corner cases that are easily forgotten (as solution 1 would), and it allows more flexibility if we later want LUs to release (part of) their locks (which is still a possibly scary operation, but anyway). It also combines easily with solution 2, should we choose to implement it. Reviewed-by: imsnah
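A tiny runnable illustration of why the two lists must stay separate (names invented): both an empty request and a whole-empty-set request yield an empty acquired list, so the release decision cannot be based on the acquired list alone.

    def needs_release(needed, acquired):
        # 'needed' is what the LU requested (None means "the whole set");
        # 'acquired' is what the acquire returned. With an empty set both
        # requests return [], but only the None case holds the set-lock.
        return needed is None or len(acquired) > 0

    assert needs_release([], []) is False    # empty request: nothing to release
    assert needs_release(None, []) is True   # whole (empty) set: set-lock held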
- Aug 27, 2008
Michael Hanselmann authored
A job should only exist once in memory. After the cache is cleaned, there can still be references to a job somewhere else. If there are multiple instances, one can get updated while a function is waiting for changes on another instance. By using weakref.WeakValueDictionary, which automatically removes instances as soon as there are no strong references to them anymore, we can solve this problem. Reviewed-by: iustinp
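A minimal sketch of the caching pattern described above (class and method names are invented):

    import weakref

    class JobCache:
        def __init__(self):
            # Entries vanish automatically once no strong reference to the
            # job object remains, so a job can never exist twice in memory.
            self._jobs = weakref.WeakValueDictionary()

        def get(self, job_id, loader):
            job = self._jobs.get(job_id)
            if job is None:
                job = loader(job_id)  # e.g. re-load the job from disk
                self._jobs[job_id] = job
            return job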
Michael Hanselmann authored
Reviewed-by: ultrotter
Michael Hanselmann authored
It can be confusing otherwise. Reviewed-by: ultrotter
Iustin Pop authored
This is a result of the log timestamp changes. Reviewed-by: imsnah
Iustin Pop authored
Seems no one ran a burnin lately :) Reviewed-by: amischenko,ultrotter
Michael Hanselmann authored
This is a large patch, but I can't figure out how to split it without breaking stuff. The old way of getting messages by always getting the last one didn't bring all messages to the client if they were added too fast, thereby making commands like “gnt-cluster verify” less than useful. These changes introduce a serial number per log entry to keep track of which messages a client has already received. They also remove the log lock per opcode to make reading log entries thread safe. Reviewed-by: ultrotter
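A hedged sketch of the serial-number scheme described here (names invented):

    class OpLog:
        def __init__(self):
            self._serial = 0
            self._entries = []  # list of (serial, message) tuples

        def append(self, message):
            self._serial += 1
            self._entries.append((self._serial, message))

        def entries_after(self, last_seen):
            # The client passes the highest serial it has received, so no
            # message is lost even if several were added in quick succession.
            return [e for e in self._entries if e[0] > last_seen]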
- Aug 26, 2008
Michael Hanselmann authored
This gives continuous output instead of buffering it. Reviewed-by: ultrotter
Michael Hanselmann authored
Currently it can only be enabled by modifying utils.py, but we can add a command line parameter later if needed. Reviewed-by: schreiberal
- Aug 25, 2008
Michael Hanselmann authored
Reviewed-by: ultrotter
Michael Hanselmann authored
I forgot to remove these when converting the QA configuration from YAML to JSON. Reviewed-by: ultrotter
- Aug 19, 2008
Michael Hanselmann authored
Vim doesn't recognize the format automatically. Reviewed-by: ultrotter
- Aug 18, 2008
Guido Trotter authored
As with LUQueryInstances, the first version just acquires a shared lock on all nodes. In the future further optimizations are possible, as outlined by comments in the code. Reviewed-by: imsnah
Guido Trotter authored
This first version acquires a shared lock on all requested instances and their nodes. In the future it can be improved by acquiring fewer locks if no dynamic fields have been requested, and/or by locking just primary nodes. Reviewed-by: imsnah
Guido Trotter authored
A few more tests written while bug-hunting. One of them shows a real issue, at last. :) Reviewed-by: imsnah
Guido Trotter authored
I was hunting for a bug in my code and thought the culprit was in the locking library, so I added a test to check. Unfortunately, it turns out it wasn't. :( Committing the test anyway, while still trying to figure out what's wrong... Reviewed-by: imsnah
Guido Trotter authored
If a list with a duplicate value is passed to a lockset, the current code tries to acquire the lock twice, generating a double-acquire exception in the SharedLock code. This is definitely an issue. In order to solve it we can either forbid duplicate values in a list or just delete the duplicates. In this patch we go for the latter solution, removing any duplicate values when creating the acquire_list. Reviewed-by: imsnah
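A small sketch of order-preserving duplicate removal when building the acquire list (illustrative only, not the actual patch):

    def unique(names):
        seen = set()
        result = []
        for name in names:
            if name not in seen:
                seen.add(name)
                result.append(name)
        return result

    assert unique(["node1", "node2", "node1"]) == ["node1", "node2"]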
Guido Trotter authored
If a locking level wasn't specified, locking used to stop at that level. This means that if one, for example, didn't specify anything at the LEVEL_INSTANCE level, no locks at the LEVEL_NODE level were acquired either. With this patch we force _LockAndExecLU to be called for all existing levels, and break the recursion if the level doesn't exist in locking.LEVELS. Reviewed-by: imsnah
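A sketch of the recursion this describes; the level list and helper are stand-ins for locking.LEVELS and _LockAndExecLU, not the real code:

    import contextlib

    LEVELS = [0, 1, 2]  # e.g. cluster, instance, node (illustrative)

    @contextlib.contextmanager
    def acquire_level_locks(lu, level):
        yield  # placeholder: acquire whatever 'lu' needs here, possibly nothing

    def lock_and_exec_lu(lu, level=0):
        if level not in LEVELS:
            lu()  # past the last level: actually execute the LU
            return
        with acquire_level_locks(lu, level):
            # Recurse to the next level even if this one needed no locks.
            lock_and_exec_lu(lu, level + 1)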
Guido Trotter authored
The check for the reboot type can be done without any locks held, so we'll move it to ExpandNames. Plus, we note in a FIXME that if the reboot type is not full, we can probably just lock the primary node, and leave the secondary unlocked. Reviewed-by: imsnah
Michael Hanselmann authored
We no longer use YAML in Ganeti at all. This patch converts the QA configuration from YAML to JSON. JSON doesn't support comments, so I had to use a hack with fields starting with '#'. Reviewed-by: ultrotter
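An illustration of the '#'-field workaround (key name invented): since JSON has no comment syntax, comments are stored as ordinary string values under keys starting with '#'.

    import json

    qa_config = json.loads("""
    {
      "# note": "fields whose names start with '#' serve as comments",
      "name": "example-cluster"
    }
    """)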
Michael Hanselmann authored
Reviewed-by: schreiberal
Michael Hanselmann authored
By using this Linux-specific way we don't have to care about removing the socket file when quitting or starting (after an unclean shutdown). For a more detailed description, see the comment in the patch. Reviewed-by: schreiberal
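The commit doesn't name the mechanism, but one Linux-specific feature matching this description is the abstract socket namespace, sketched here with an invented name:

    import socket

    # A name starting with a NUL byte lives in the abstract namespace, outside
    # the filesystem, so no stale socket file survives an unclean shutdown.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind("\0ganeti-example-socket")
    sock.listen(5)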
Michael Hanselmann authored
This patch also sorts the list. Reviewed-by: schreiberal
Michael Hanselmann authored
Reviewed-by: ultrotter
Michael Hanselmann authored
The cluster no longer keeps individual hosts' SSH keys, but rather aliases all of them to the cluster name. Reviewed-by: ultrotter
Michael Hanselmann authored
Apparently it was forgotten when importing the remote API QA tests. Reviewed-by: schreiberal
- Aug 15, 2008
Michael Hanselmann authored
This option will be used to add nodes to the cluster without asking the user to confirm the key. Together with key-based authentication this can be used in the QA tests. Reviewed-by: ultrotter
Michael Hanselmann authored
This will be used to add nodes without user interaction, specifically in QA tests. Reviewed-by: ultrotter