  1. Apr 27, 2011
    • Replace disks: keep the meta device in the same VG · fd09d178
      Iustin Pop authored
      
      This patch enhances the multi-VG support in replace-disks by keeping
      the meta device in its current VG, as opposed to moving it to the
      data device's VG (note that we don't have a way to create the meta
      device in a different VG in the first place, but at least a custom
      configuration is now handled correctly).
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Fix for multiple VGs - PlainToDrbd and replace-disks · 88aa7f66
      Doug Dumitru authored
      
      When converting an instance from 'plain' to 'drbd', the old code
      would create the DRBD volumes in the default VG, and the subsequent
      renames would then fail. This fix pulls the VG names from the
      existing plain volumes and places them into the new disk template.
      
      Running 'replace-disks' has a similar issue with the new disks going
      into the wrong VG and then the rename failing.
      
      There might be a similar issue with 'recreate-disks', but I actually
      have no idea what recreate-disks does, so I did not look into it.
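
      The idea behind the fix, as a minimal standalone sketch (the helper
      name, the default VG and the tuple shape are invented here; this is
      not the actual cmdlib code):

        DEFAULT_VG = "xenvg"

        def ChooseVG(existing_lvs):
          """Return the VG of the existing plain LVs, or the default.

          existing_lvs is assumed to be a list of (vg_name, lv_name)
          pairs describing the instance's current 'plain' volumes.
          """
          vgs = set(vg for (vg, _lv) in existing_lvs)
          if len(vgs) == 1:
            return vgs.pop()
          return DEFAULT_VG

      The new data and meta LVs can then be created as (vg, lv_name)
      pairs, so that the later renames stay within the same volume group.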
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Fix potential data-loss in utils.WriteFile · 437c3e77
      Iustin Pop authored
      
      os.write can do incomplete writes, as long as at least some bytes have
      been written (like write(2)):
      
      >>> os.write(fd, " " * 1300)
      1300
      >>> os.write(fd, " " * 1300)
      1300
      >>> os.write(fd, " " * 1300)
      1300
      >>> os.write(fd, " " * 1300)
      980
      >>> os.write(fd, " " * 1300)
      Traceback (most recent call last):
       File "<stdin>", line 1, in ?
      OSError: [Errno 28] No space left on device
      
      Note the incomplete write of only 980 bytes, just before the
      exception.
      
      To work around this, we simply iterate until all the data has been
      written. Unittests could be written by taking the write function as
      a parameter instead of hardcoding os.write, and then checking for
      incomplete writes.
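
      The workaround, as a minimal sketch independent of the real
      utils.WriteFile (the helper name _WriteAll is made up):

        import os

        def _WriteAll(fd, data):
          """Call os.write() repeatedly until the whole buffer is written."""
          written = 0
          while written < len(data):
            # os.write() may write fewer bytes than requested; advance by
            # whatever it reports and retry with the rest.
            written += os.write(fd, data[written:])

      Taking the write function as a parameter instead of calling
      os.write directly would then let a unittest simulate short writes.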
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Improve error messages in cluster verify/OS · 2db04578
      Iustin Pop authored
      
      A few issues in the clarity of the error messages are fixed:
      
      - "ERROR: node node3: OS API version lenny-image": no preposition
        between the parameter type and the OS name, changed to "for
        lenny-image"
      
      - "API version lenny-image differs from reference node node1: 10, 5
        vs. 10, 20, 5, 15": parameters not sorted in display
      
      - "OS variants list lenny-image differs from reference node node1:
        vs. default, i386": empty sets are not clearly delimited, changed to
        add [] around the sets: "node node1: [] vs. [default, i386]"
      
      - "OS parameters lenny-image differs from reference node node1:
        vs. (u'dhcp', u'Whether to enable (yes) or disable (dhcp)')": ugly
        formatting in the OS parameters list, as we used to just "%s" the
        tuple; now it is "reference node node1: [] vs. [dhcp: Whether to
        enable (yes) or disable (dhcp)]"
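
      A hedged illustration of the bracketed, sorted formatting (the
      helper name is invented; this is not the actual cmdlib code):

        def _FormatSet(items):
          """Format a variant/parameter set for an 'X vs. Y' message."""
          return "[%s]" % ", ".join(str(i) for i in sorted(items))

        # _FormatSet([]) + " vs. " + _FormatSet(["i386", "default"])
        # yields "[] vs. [default, i386]"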
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Prevent readding of the master node · d833acc6
      Iustin Pop authored
      
      Readding the master node breaks Ganeti in multiple ways. If we
      don't make the check in gnt-node itself, then
      bootstrap.SetupNodeDaemon will restart the master daemon, making
      the operation fail:
      
        node1# gnt-node add --readd node1
        Cannot communicate with the master daemon.
        Is it running and listening for connections?
      
      The check in cmdlib is more of a safety check, as we shouldn't reach
      it. If we do (via a bad client), then it will prevent breakage in the
      job queue/config handling.
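
      Conceptually the client-side check boils down to something like the
      following sketch (names and the exception type are invented; the
      real gnt-node code differs):

        def CheckNotReaddingMaster(readd, node_name, master_node):
          """Refuse to readd the cluster's own master node."""
          # Checking in the client avoids bootstrap.SetupNodeDaemon
          # restarting the master daemon mid-operation.
          if readd and node_name == master_node:
            raise Exception("Cannot --readd the master node %s" % node_name)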
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Fix punctuation in an error message · cce6f357
      Iustin Pop authored
      
      IIRC we don't use punctuation at the end of error messages.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
  2. Apr 19, 2011
    • Fix master IP activation in failover with no-voting · 675e2bf5
      Iustin Pop authored
      
      Thanks to net.for.hub@gmail.com for reporting this. The logic in
      masterd.CheckMasterd did an early return in the no_voting case,
      hence skipping the master IP activation. We change the ifs so that
      they don't return but simply continue through the function.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
    • disk wiping: fix bug in chunk size computation · 6e7f0cd9
      Iustin Pop authored
      
      The current wipe_chunk_size computation is doing min(int_value,
      float_value). For small disks (below 10GiB), the actual formula
      will result in the float value being chosen. This leads to very
      interesting behaviour:
      
      Wiping disk 0, offset 102.4, chunk 102.4
      Wiping disk 0, offset 204.8, chunk 102.4
      …
      Wiping disk 0, offset 921.6, chunk 102.4
      Wiping disk 0, offset 1024.0, chunk 1.13686837722e-13
      
      Since these are passed to dd via %d, this will result in a call to
      dd specifying offset 1024 and count 0, which will fail.

      We just need to enforce conversion to int in order not to get
      bitten by floating-point rounding errors.
      
      The patch also reorders some logging messages in order to log the
      chunk size.
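
      A simplified reproduction of the issue (the constants and the "10%
      of the disk size" formula here are illustrative, not the actual
      ones):

        size = 1024        # disk size in MiB (an integer)
        max_chunk = 10240  # illustrative upper bound on the chunk size

        # Buggy: min() of an int and a float keeps the float (102.4), the
        # offsets then accumulate rounding errors and the final chunk is
        # ~1.1e-13, which "%d" hands to dd as count 0:
        #   chunk_size = min(max_chunk, size / 10.0)
        # Fixed: force an integer chunk size up front.
        chunk_size = int(min(max_chunk, size / 10.0))

        offset = 0
        while offset < size:
          wipe_size = min(chunk_size, size - offset)
          # the real code passes offset and wipe_size to dd via "%d"
          offset += wipe_size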
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Fix bug in watcher · a0aa6b49
      Michael Hanselmann authored
      
      If “utils.RunParts” raised an exception, a log message was written
      and the code continued to run. Due to the exception, however, the
      “results” variable would not be defined.
      
      Also change the code to log a backtrace (getting an exception is rather
      unlikely and having a backtrace is useful) and update one comment.
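
      In outline, the fixed control flow looks like this (a sketch with
      invented names and an assumed result-tuple shape, not the actual
      watcher code):

        import logging

        def RunWatcherHooks(run_parts_fn, dirname):
          """Run the hook scripts in dirname and log their results."""
          try:
            results = run_parts_fn(dirname)
          except Exception:
            # logging.exception() also records the backtrace, which a
            # plain logging.error() call would not.
            logging.exception("RunParts %s failed", dirname)
            return  # without this, "results" would be undefined below
          for (relname, status, runresult) in results:
            logging.info("%s: %s, %s", relname, status, runresult)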
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: René Nussbaumer <rn@google.com>
  3. Apr 06, 2011
    • LUInstanceQueryData: Don't acquire locks unless requested · dae661a4
      Michael Hanselmann authored
      
      Until now LUInstanceQueryData always acquired locks for the
      instance(s) and nodes involved. In combination with long-running
      operations this prevented the use of “gnt-instance info”, even with
      the “--static” option. With this patch, locks are only acquired
      when explicitly requested in the opcode (as with all query
      operations).
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
    • Increase the lock timeouts before we block-acquire · d385a174
      Iustin Pop authored
      
      This has been observed to cause problems on real clusters via the
      following mechanism:
      
      - a long job (e.g. a replace-disks) is keeping an exclusive lock on an
        instance
      - the watcher starts and submits its query instances opcode which
        wants shared locks for all instances
      - after about an hour, the watcher job falls back to blocking acquire,
        after having acquired all other locks
      - any instance opcode that wants an exclusive lock for an instance
        cannot start until the watcher has finished, even though there's no
        actual operation on that instance
      
      In order to alleviate this problem, we simply increase the maximum
      timeout before lock acquires fall back to either a blocking acquire
      or a priority increase. The timeout is computed such that we wait
      ~10 hours (instead of one) for this to happen, which should be
      within the maximum lifetime of a reasonable opcode on a healthy
      cluster. The timeout also means that priority increases will happen
      every half hour.

      We also increase the maximum wait interval to 15 seconds, since
      otherwise we'd have too many retries within the increased timeout.
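
      As a rough model of the effect (the starting value, growth factor
      and exact caps below are made up, not the real locking constants):

        def TimeoutSeries(start=1.0, factor=1.5, per_attempt_cap=15.0,
                          total_budget=10 * 3600.0):
          """Yield per-attempt timeouts until the total budget is spent.

          Capping each attempt at ~15s while growing the overall budget
          to ~10 hours means many more retries happen before falling
          back to a blocking acquire.
          """
          timeout = start
          elapsed = 0.0
          while elapsed < total_budget:
            attempt = min(timeout, per_attempt_cap)
            yield attempt
            elapsed += attempt
            timeout *= factor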
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
  4. Apr 04, 2011
    • daemon.py: move startup log message before prep_fn · fe295df3
      Iustin Pop authored
      
      Before this, the output in the rapi daemon log was:
      2011-04-04 03:09:51,026: ganeti-rapi pid=17447 INFO Reading users file
      at /var/lib/ganeti/rapi/users
      2011-04-04 03:09:51,027: ganeti-rapi pid=17447 INFO ganeti-rapi daemon
      startup
      
      This is confusing, as it might look like the read of the users file
      is part of the previous run. The reason is that we log the 'daemon
      startup' message after prepare_fn, which can log things on its own.

      The patch simply moves the 'daemon startup' message to just before
      the prepare_fn call.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Display the actual memory values in N+1 failures · 0942620b
      Iustin Pop authored
      
      This changes the display from:
      Mon Apr  4 02:29:46 2011 * Verifying N+1 Memory redundancy
      Mon Apr  4 02:29:46 2011   - ERROR: node node2: not enough memory to
      accomodate instance failovers should node node1 fail
      
      To:
      
      Mon Apr  4 02:32:50 2011 * Verifying N+1 Memory redundancy
      Mon Apr  4 02:32:50 2011   - ERROR: node node2: not enough memory to
      accomodate instance failovers should node node1 fail (33536MiB needed,
      27910MiB available)
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
  5. Mar 16, 2011
    • locking: Fix race condition in lock monitor · e4e35357
      Michael Hanselmann authored
      
      In some rare cases it can happen that a lock is re-created very
      soon after deletion, while the old lock object hasn't been
      destroyed yet. In such a case the code would detect a duplicate
      name and raise an exception.
      
      We have seen at least one case where this happened during the creation
      of many instances. It is not exactly clear how it came to be, but it
      appears to have occurred while different jobs fought for locks with
      short timeouts (in the case of instance creation locks are added at this
      stage and removed shortly after if not all locks can be acquired).
      
      The issue is fixed by removing the check for duplicate names. To still
      guarantee a stable sort order for the lock information as shown by
      “gnt-debug locks”, a registration number is recorded for each lock in
      the monitor.
      
      A unittest is included to check for the situation.
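
      The registration-number idea, as a toy sketch (this is not the real
      locking monitor code):

        import itertools

        class LockMonitor(object):
          """Tracks locks by a unique registration number, not by name."""

          def __init__(self):
            self._counter = itertools.count(1)
            self._locks = {}  # registration number -> lock name

          def RegisterLock(self, name):
            # No duplicate-name check: a re-created lock may briefly
            # coexist with its not-yet-destroyed predecessor.
            num = next(self._counter)
            self._locks[num] = name
            return num

          def QueryLocks(self):
            # Sorting by registration number keeps the output of
            # "gnt-debug locks" stable even when names repeat.
            return [self._locks[num] for num in sorted(self._locks)]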
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>