  1. Apr 06, 2011
    • LUInstanceQueryData: Don't acquire locks unless requested · dae661a4
      Michael Hanselmann authored
      
      Until now LUInstanceQueryData always acquired locks for the instance(s)
      and nodes involved. In combination with long-running operations this
      prevented the use of “gnt-instance info”, even with the “--static”
      option. With this patch, locks are only acquired when explicitly
      requested in the opcode (as with all other query operations).
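
      The pattern can be sketched roughly as follows; the class, opcode and
      constant names below are illustrative stand-ins, not the exact Ganeti
      logical-unit API:

        # Illustrative sketch only: InstanceQueryData, LEVEL_INSTANCE and
        # use_locking are stand-ins for the real Ganeti logical-unit code.
        LEVEL_INSTANCE = "instance"
        LEVEL_NODE = "node"


        class InstanceQueryData:
            """Query unit that only takes locks when the opcode asks for them."""

            def __init__(self, opcode):
                self.op = opcode
                self.needed_locks = {}
                self.share_locks = {}

            def expand_names(self):
                # Previously, locks were declared unconditionally here; now
                # they are declared only when the opcode explicitly asks.
                if self.op.use_locking:
                    self.needed_locks = {
                        LEVEL_INSTANCE: list(self.op.instances),
                        LEVEL_NODE: [],  # filled once instance locks are held
                    }
                    self.share_locks = {LEVEL_INSTANCE: 1, LEVEL_NODE: 1}
                # With use_locking left off, the query is served from static
                # configuration data and does not wait on long-running jobs.


        class _Op:
            def __init__(self, instances, use_locking):
                self.instances = instances
                self.use_locking = use_locking


        if __name__ == "__main__":
            lu = InstanceQueryData(_Op(["inst1.example.com"], use_locking=False))
            lu.expand_names()
            print(lu.needed_locks)   # {} -> nothing to wait for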
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
    • Increase the lock timeouts before we block-acquire · d385a174
      Iustin Pop authored
      
      This has been observed to cause problems on real clusters via the
      following mechanism:
      
      - a long job (e.g. a replace-disks) is keeping an exclusive lock on an
        instance
      - the watcher starts and submits its query instances opcode which
        wants shared locks for all instances
      - after about an hour, the watcher job falls back to blocking acquire,
        after having acquired all other locks
      - any instance opcode that wants an exclusive lock for an instance
        cannot start until the watcher has finished, even though there's no
        actual operation on that instance
      
      In order to alleviate this problem, we simply increase the maximum
      timeout before a lock acquisition falls back to a blocking acquire or
      a priority increase. The timeout is computed such that we wait roughly
      10 hours (instead of one) before this happens, which should be within
      the maximum lifetime of a reasonable opcode on a healthy cluster. It
      also means that priority increases will happen every half hour.

      We also increase the maximum wait interval to 15 seconds; otherwise,
      with the larger total timeout, we would end up with too many retries.
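
      As a rough illustration of that arithmetic, the sketch below builds
      such a retry schedule; the starting timeout and growth factor are
      assumptions, only the 15-second cap, the ~10-hour budget and the
      half-hour priority step come from the description above:

        # Sketch of a lock-acquire retry schedule; start value and growth
        # factor are assumptions, not Ganeti's actual constants.
        MAX_WAIT = 15.0             # cap for a single timed acquire, seconds
        TOTAL_BUDGET = 10 * 3600.0  # ~10 hours before falling back to blocking
        PRIO_STEP = 30 * 60.0       # raise the opcode priority every half hour


        def timeout_schedule(start=1.0, factor=1.5):
            """Yield (attempt_timeout, elapsed, bump_priority) tuples."""
            elapsed = 0.0
            timeout = start
            next_prio = PRIO_STEP
            while elapsed < TOTAL_BUDGET:
                timeout = min(timeout * factor, MAX_WAIT)
                elapsed += timeout
                bump = elapsed >= next_prio
                if bump:
                    next_prio += PRIO_STEP
                yield timeout, elapsed, bump
            # Budget exhausted: the caller would now do a plain blocking acquire.


        if __name__ == "__main__":
            attempts = list(timeout_schedule())
            print("timed attempts before blocking:", len(attempts))
            print("total wait: %.1f hours" % (attempts[-1][1] / 3600))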
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
  2. Apr 04, 2011
    • daemon.py: move startup log message before prep_fn · fe295df3
      Iustin Pop authored
      
      Before this, the output in the rapi daemon log was:
      2011-04-04 03:09:51,026: ganeti-rapi pid=17447 INFO Reading users file
      at /var/lib/ganeti/rapi/users
      2011-04-04 03:09:51,027: ganeti-rapi pid=17447 INFO ganeti-rapi daemon
      startup
      
      This is confusing, as it might look like the users file is read as
      part of the previous run. The cause is that we log the 'daemon
      startup' message after prepare_fn, which can log messages of its
      own.

      The patch simply moves the 'daemon startup' message to just before
      the prepare_fn call.
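
      A simplified sketch of the new ordering, with generic_main and the
      helper functions standing in for the real daemon.py code:

        # Illustrative only: generic_main/prepare_fn are simplified stand-ins
        # for the daemon startup helpers.
        import logging


        def generic_main(daemon_name, prepare_fn, exec_fn):
            logging.basicConfig(format="%(asctime)s: %(message)s",
                                level=logging.INFO)
            # Log the startup message *before* prepare_fn, so anything that
            # prepare_fn logs is clearly attributed to this run.
            logging.info("%s daemon startup", daemon_name)
            prep_result = prepare_fn() if prepare_fn else None
            exec_fn(prep_result)


        def _prepare():
            logging.info("Reading users file at /var/lib/ganeti/rapi/users")
            return object()


        def _run(prep_result):
            logging.info("serving requests")


        if __name__ == "__main__":
            generic_main("ganeti-rapi", _prepare, _run)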
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Display the actual memory values in N+1 failures · 0942620b
      Iustin Pop authored
      
      This changes the display from:
      Mon Apr  4 02:29:46 2011 * Verifying N+1 Memory redundancy
      Mon Apr  4 02:29:46 2011   - ERROR: node node2: not enough memory to
      accomodate instance failovers should node node1 fail
      
      To:
      
      Mon Apr  4 02:32:50 2011 * Verifying N+1 Memory redundancy
      Mon Apr  4 02:32:50 2011   - ERROR: node node2: not enough memory to
      accomodate instance failovers should node node1 fail (33536MiB needed,
      27910MiB available)
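
      A sketch of how such a message could be produced; the check and its
      parameter names are simplified assumptions, not the actual
      cluster-verify code:

        # Simplified N+1 memory check; field and function names are
        # assumptions, not the actual cluster-verify code.
        def check_n_plus_one(node, failing_node, needed_mib, free_mib, errors):
            """Record an N+1 error that includes the actual memory figures."""
            if needed_mib > free_mib:
                errors.append(
                    "node %s: not enough memory to accommodate instance "
                    "failovers should node %s fail (%dMiB needed, %dMiB "
                    "available)" % (node, failing_node, needed_mib, free_mib))


        if __name__ == "__main__":
            errs = []
            check_n_plus_one("node2", "node1", 33536, 27910, errs)
            print(errs[0])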
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
  3. Mar 31, 2011
  4. Mar 24, 2011
  5. Mar 17, 2011
  6. Mar 16, 2011
    • locking: Fix race condition in lock monitor · e4e35357
      Michael Hanselmann authored
      
      In some rare cases a lock can be re-created very soon after its
      deletion, while the old lock object has not been destroyed yet. In
      such a case the code would detect a duplicate name and raise an
      exception.

      We have seen at least one case where this happened during the creation
      of many instances. It is not entirely clear how it came about, but it
      appears to have occurred while different jobs competed for locks with
      short timeouts (during instance creation, locks are added at this
      stage and removed shortly afterwards if not all of them can be
      acquired).
      
      The issue is fixed by removing the check for duplicate names. To still
      guarantee a stable sort order for the lock information as shown by
      “gnt-debug locks”, a registration number is recorded for each lock in
      the monitor.
      
      A unittest is included to check for the situation.
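
      A minimal sketch of the registration-number idea, with LockMonitor as
      a simplified stand-in for the real class:

        # Simplified stand-in for the lock monitor; not the actual Ganeti code.
        import itertools
        import threading


        class LockMonitor:
            def __init__(self):
                self._lock = threading.Lock()
                self._counter = itertools.count(0)
                self._locks = {}   # registration number -> lock

            def register(self, lock):
                with self._lock:
                    # No duplicate-name check: a lock re-created under the same
                    # name simply receives a new, larger registration number.
                    lock.monitor_reg = next(self._counter)
                    self._locks[lock.monitor_reg] = lock

            def unregister(self, lock):
                with self._lock:
                    self._locks.pop(lock.monitor_reg, None)

            def query(self):
                with self._lock:
                    # Sorting by (name, registration number) keeps the
                    # "gnt-debug locks" style output stable even if a name
                    # briefly appears twice.
                    return sorted(self._locks.values(),
                                  key=lambda l: (l.name, l.monitor_reg))


        class _Lock:
            def __init__(self, name):
                self.name = name
                self.monitor_reg = None


        if __name__ == "__main__":
            mon = LockMonitor()
            old, new = _Lock("instance1"), _Lock("instance1")
            mon.register(old)   # old object not yet unregistered...
            mon.register(new)   # ...re-registering the same name is fine now
            print([(l.name, l.monitor_reg) for l in mon.query()])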
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
  7. Mar 15, 2011
  8. Mar 11, 2011
  9. Mar 10, 2011
  10. Mar 07, 2011
  11. Mar 04, 2011
  12. Mar 02, 2011
  13. Mar 01, 2011
  14. Feb 28, 2011
  15. Feb 25, 2011
  16. Feb 24, 2011
  17. Feb 23, 2011
  18. Feb 22, 2011
  19. Feb 21, 2011
    • Update news and bump version for 2.4.0 rc2 · e41a1c0c
      Iustin Pop authored
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: René Nussbaumer <rn@google.com>
      Tag: v2.4.0rc2
    • Merge branch 'devel-2.4' into stable-2.4 · b31393a1
      Iustin Pop authored
      
      * devel-2.4: (23 commits)
        Fix pylint warnings
        Change the list formatting to a 'special' chars
        Add support for merging node groups
        Add option to rename groups on conflict
        Fix minor docstring typo
        Fix HV/OS parameter validation on non-vm nodes
        NodeQuery: mark live fields as UNAVAIL for non-vm_capable nodes
        NodeQuery: don't query non-vm_capable nodes
        Remove superfluous redundant requirement
        Don't remove master_candidate flag from merged nodes
        Use a consistent ECID base
        listrunner: convert from getopt to optparse
        listrunner: fix agent usage
        Revert "Disable the cluster-merge tool for the moment"
        Fix cluster-merging by not stopping noded
        Fix error msg for instances on offline nodes
        Minor reordering to match param order
        cluster verify and instance disks on offline nodes
        Cluster verify and N+1 warnings for offline nodes
        Handle gnt-instance shutdown --all for empty clusters
        Use gnt-node add --force-join to add foreign nodes
        Add --force-join option to gnt-node add
        Fix iterating over node groups
      
      Of the commits above from the devel-2.4 branch, only “Add
      --force-join option to gnt-node add” is a potential issue, and it has
      been QA-ed successfully. The other fixes fall into three groups:
      
      - non-core changes (cluster-merge, listrunner)
      - trivial fixes (docstrings, etc.)
      - bugs that we want fixed
      
      As such, instead of cherry-picking individual patches, I propose
      that we unify stable and devel 2.4 and make a new RC out of the
      result.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
  20. Feb 18, 2011