1. 10 Jul, 2014 1 commit
  2. 12 Jun, 2014 1 commit
      Support disk hotplug with userspace access · 6b31e28f
      Dimitris Aragiorgis authored
      
      
      Before any hotplug action we assemble the block device. Currently
      call_blockdev_assemble() returns the link_name as calculated by
      _SymlinkBlockDev().
      
      With userspace support we have to return the drive_uri as calculated
      by _CalculateDeviceURI() as well, in order for the drive_add monitor
      command to be able to use it.
      
      Additionally, with this patch the runtime files are properly updated
      to include the drive URI as well; thus, upon instance migration,
      the target process will be started with the correct drive options.
      Signed-off-by: Dimitris Aragiorgis <dimara@grnet.gr>
      Signed-off-by: Klaus Aehlig <aehlig@google.com>
      Reviewed-by: Klaus Aehlig <aehlig@google.com>
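The change above can be sketched as follows. This is a minimal illustration, not the real Ganeti code: the function name, path layout, and the RBD-style URI are hypothetical stand-ins for what _SymlinkBlockDev() and _CalculateDeviceURI() compute.

```python
def assemble_block_device(instance, disk_index, userspace=True):
    """Assemble a disk and report both access paths (illustrative only).

    Returns the kernelspace symlink (as _SymlinkBlockDev() would) and,
    when userspace access is enabled, a drive URI (as _CalculateDeviceURI()
    would) so the hotplug code can hand it to the drive_add monitor command.
    """
    link_name = "/dev/disk/gnt/%s/disk%d" % (instance, disk_index)
    drive_uri = None
    if userspace:
        # Hypothetical userspace URI, e.g. an RBD image QEMU can open directly
        drive_uri = "rbd:rbd/%s.disk%d" % (instance, disk_index)
    return link_name, drive_uri

link, uri = assemble_block_device("inst1", 0, userspace=True)
```

Storing both values in the runtime files keeps kernelspace and userspace callers working from the same record.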
  3. 22 May, 2014 1 commit
  4. 07 Mar, 2014 1 commit
  5. 04 Mar, 2014 1 commit
  6. 20 Feb, 2014 1 commit
  7. 06 Nov, 2013 1 commit
      DRBD: ensure peers are UpToDate for dual-primary · 73e15b5e
      Apollon Oikonomopoulos authored
      DrbdAttachNet supports both normal primary/secondary node operation and
      (during live migration) dual-primary operation. When resources are newly
      attached, we poll until we find all of them in a connected or syncing state.
      
      Although aggressive, this is enough for primary/secondary operation, because
      the primary/secondary role is not changed from within DrbdAttachNet. However,
      in the dual-primary ("multimaster") case, both peers are subsequently upgraded
      to the primary role.  If - for unspecified reasons - both disks are not
      UpToDate, then a resync may be triggered after both peers have switched to
      primary, causing the resource to disconnect:
      
        kernel: [1465514.164009] block drbd2: I shall become SyncTarget, but I am
          primary!
        kernel: [1465514.171562] block drbd2: ASSERT( os.conn == C_WF_REPORT_PARAMS )
          in /build/linux-rrsxby/linux-3.2.51/drivers/block/drbd/drbd_receiver.c:3245
      
      This seems to be extremely racy and is possibly triggered by some underlying
      network issue (e.g. high latency), but it has been observed in the wild. By
      logging the DRBD resource state on the old secondary, we managed to see a
      resource getting promoted to primary while it was:
      
        WFSyncUUID Secondary/Primary Outdated/UpToDate
      
      We fix this by explicitly waiting for "Connected" cstate and
      "UpToDate/UpToDate" disks, as advised in [1]:
      
        "For this purpose and scenario,
         you only want to promote once you are Connected UpToDate/UpToDate."
      
      [1] http://lists.linbit.com/pipermail/drbd-user/2013-July/020173.html
      
      Signed-off-by: Apollon Oikonomopoulos <apoikos@gmail.com>
      Signed-off-by: Michele Tartara <mtartara@google.com>
      Reviewed-by: Michele Tartara <mtartara@google.com>
      Reviewed-by: Klaus Aehlig <aehlig@google.com>
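The fix described above amounts to tightening the wait condition before promotion. A minimal sketch, assuming a `get_status()` callback that reports the resource's connection state and local/remote disk states (the real code parses /proc/drbd; the names here are hypothetical):

```python
import time

def wait_until_safe_to_promote(get_status, timeout=60.0, interval=0.1):
    """Poll DRBD status until the cstate is Connected and both disks are
    UpToDate, as required before promoting both peers to primary for
    dual-primary (live migration) operation. Illustrative only.

    get_status() returns a (cstate, local_disk, remote_disk) tuple.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        cstate, local_disk, remote_disk = get_status()
        # Merely "connected or syncing" is NOT enough: a peer that is
        # Outdated may trigger a resync after becoming primary.
        if cstate == "Connected" and local_disk == remote_disk == "UpToDate":
            return True
        time.sleep(interval)
    return False
```

Waiting for `Connected UpToDate/UpToDate` rules out the `WFSyncUUID ... Outdated/UpToDate` window in which the failing promotion was observed.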
  8. 31 Oct, 2013 1 commit
  9. 30 Oct, 2013 2 commits
  10. 24 Oct, 2013 4 commits
  11. 02 Oct, 2013 1 commit
  12. 27 Sep, 2013 2 commits
  13. 24 Sep, 2013 3 commits
  14. 28 Aug, 2013 1 commit
  15. 23 Aug, 2013 1 commit
  16. 21 Aug, 2013 1 commit
  17. 29 Jul, 2013 1 commit
  18. 15 Jul, 2013 3 commits
      Verify file storage path · 9c1c3c19
      Helga Velroyen authored
      
      
      This patch adds two verification steps to 'gnt-cluster
      verify':
      - The configured file storage directory is checked against
        the allowed file storage directories file.
      - We check whether the configured file storage directory
        exists and is writable on each node.
      Signed-off-by: Helga Velroyen <helgav@google.com>
      Reviewed-by: Klaus Aehlig <aehlig@google.com>
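The two checks can be sketched like this. This is a hypothetical simplification of what the verification step does, not the actual Ganeti helper; the function name and error strings are invented for illustration:

```python
import os

def verify_file_storage_path(path, allowed_paths):
    """Return a list of problems with the configured file storage
    directory: (1) it must lie under one of the allowed directories,
    (2) it must exist and be writable on this node. Illustrative only."""
    errors = []
    norm = os.path.normpath(path)
    # Check 1: path must equal, or be a subdirectory of, an allowed path
    if not any(norm == os.path.normpath(a) or
               norm.startswith(os.path.normpath(a) + os.sep)
               for a in allowed_paths):
        errors.append("%s not under an allowed file storage directory" % path)
    # Check 2: the directory must exist and be writable
    if not os.path.isdir(norm):
        errors.append("%s does not exist on this node" % path)
    elif not os.access(norm, os.W_OK):
        errors.append("%s is not writable" % path)
    return errors
```

Running this on every node (rather than only the master) is what catches nodes where the directory was never created.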
      Prepare verification code for new file path verification · 13a6c760
      Helga Velroyen authored
      
      
      This patch prepares the verification code for adding
      a new verification step for the file storage paths:
      - It moves a couple of file storage helper functions from
        bdev to filestorage (since they make more sense there
        and bdev is too big anyway).
      - It renames constants and functions related to the
        verification step where the allowed file paths are
        checked against forbidden paths to a more expressive name,
        because otherwise they could be confused with the
        verification of the configured file storage paths against
        the allowed file storage paths.
      - It uses the cluster object's helper functions to check
        whether file storage is enabled instead of using the utils
        function directly, because this simplifies the code.
      Signed-off-by: Helga Velroyen <helgav@google.com>
      Reviewed-by: Klaus Aehlig <aehlig@google.com>
      backend: remove ENABLE_FILE_STORAGE · 1f7c8208
      Helga Velroyen authored
      
      
      This patch removes the usage of the ENABLE_FILE_STORAGE
      constant in the backend code. To avoid having to pass
      it through various RPC calls, we instead move the check
      to cmdlib.
      Signed-off-by: Helga Velroyen <helgav@google.com>
      Reviewed-by: Klaus Aehlig <aehlig@google.com>
  19. 10 Jul, 2013 5 commits
  20. 03 Jul, 2013 1 commit
      Fix propagation of storage parameters to/from backend · 52a8a6ae
      Helga Velroyen authored
      
      
      This patch fixes three problems with the storage reporting
      that showed up in the QA for exclusive storage:
      
      - The processing of storage space information for instance
        operations wrongly assumed that the volume group's
        storage information is always the first in the list.
      - The storage parameter 'exclusive storage' was not
        correctly extracted from the list of storage parameters.
      - There was a bug in the preparation of the storage unit for
        the node info call in the iallocator: the exclusive
        storage flag was not set for spindles, and the format
        of the storage parameters for LVM vgs was a boolean
        and not a list.
      Signed-off-by: Helga Velroyen <helgav@google.com>
      Reviewed-by: Klaus Aehlig <aehlig@google.com>
      Reviewed-by: Michele Tartara <mtartara@google.com>
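The first bug above, selecting a storage report by position instead of by type, can be sketched as follows. The dictionary shape and key names here are hypothetical, chosen only to illustrate the lookup-by-type fix:

```python
def find_storage_info(storage_units, wanted_type):
    """Pick the storage report matching the given type, instead of
    assuming the volume group's entry is always first in the list
    (the wrong assumption this patch removed). Illustrative only."""
    for unit in storage_units:
        if unit["type"] == wanted_type:
            return unit
    raise KeyError("no storage info of type %s" % wanted_type)

# Order in the report list is not guaranteed, so positional access breaks:
reports = [
    {"type": "file", "storage_free": 100},
    {"type": "lvm-vg", "storage_free": 2048},
]
vg_info = find_storage_info(reports, "lvm-vg")
```

Had the code taken `reports[0]` here, it would have read the file storage entry as if it were the volume group.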
  21. 02 Jul, 2013 2 commits
  22. 28 Jun, 2013 1 commit
  23. 14 Jun, 2013 3 commits
  24. 13 Jun, 2013 1 commit
      Index nodes by their UUID · 1c3231aa
      Thomas Thrainer authored
      
      
      No longer index nodes by their name but by their UUID in the cluster
      config. This change touches large parts of the code, as the following
      adjustments were necessary:
       * Change the index key to UUID in the configuration and the
         ConfigWriter, including all methods.
       * Change all cross-references to nodes to use UUIDs.
       * External interfaces (command line interface, IAllocator interface,
         hook scripts, etc.) are kept stable.
       * RPC calls can resolve UUIDs as target node arguments if the RPC
         runner is based on a ConfigWriter instance. The result dictionary is
         presented in the form the nodes were addressed: by UUID if UUIDs were
         given, or by name if names were given.
       * Node UUIDs are resolved in ExpandNames and then stored in the
         OpCode. This allows checking for node renames if the OpCode is
         reloaded after a cluster restart. This check is currently only done
         for single-node parameters.
       * Variable names are renamed to follow this pattern:
         - Suffix 'node' or 'nodes': variable holds Node objects
         - Suffix 'name' or 'names': variable holds node names
         - Suffix 'uuid' or 'uuids': variable holds node UUIDs
       * Tests are adapted.
      Signed-off-by: Thomas Thrainer <thomasth@google.com>
      Reviewed-by: Klaus Aehlig <aehlig@google.com>
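The indexing scheme above can be sketched with a toy model. This is not the real ConfigWriter API; the class and method names are hypothetical, showing only why a UUID-keyed primary index with a name-to-UUID side index survives node renames:

```python
class NodeConfig:
    """Toy model of indexing nodes by UUID while still resolving names.

    The UUID is the stable primary key; the name is mutable metadata
    resolved once (as in ExpandNames) and never used as an index key.
    """
    def __init__(self):
        self._nodes = {}          # uuid -> node dict (primary index)
        self._name_to_uuid = {}   # secondary index for name lookups

    def add_node(self, uuid, name):
        self._nodes[uuid] = {"uuid": uuid, "name": name}
        self._name_to_uuid[name] = uuid

    def get_node(self, uuid):
        return self._nodes[uuid]

    def expand_node_name(self, name):
        # Resolve a user-supplied name to a UUID once, then store the
        # UUID (not the name) in the opcode.
        return self._name_to_uuid[name]

    def rename_node(self, uuid, new_name):
        # Cross-references keyed by UUID stay valid across the rename.
        node = self._nodes[uuid]
        del self._name_to_uuid[node["name"]]
        node["name"] = new_name
        self._name_to_uuid[new_name] = uuid
```

An opcode holding the UUID still finds the right node after a rename, whereas one holding the old name would dangle.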