  1. Nov 24, 2011
    • ConfigWriter: Fix epydoc error · 1d4930b9
      Michael Hanselmann authored
      
      The parameter is called “mods”, not “modes”.
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: Andrea Spadaccini <spadaccio@google.com>
      (cherry picked from commit 1730d4a1)
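      As a minimal illustration of the fix (hypothetical function, not the actual ConfigWriter code): epydoc resolves @type/@param tags by name, so the tag must use the real parameter name “mods”:

        def ApplyMods(mods):
          """Apply a list of configuration modifications.

          @type mods: list
          @param mods: the modifications to apply; naming this tag
              "modes" instead of "mods" makes epydoc report an error

          """
          for mod in mods:
            pass  # hypothetical: apply each modification here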
    • LUGroupAssignNodes: Fix node membership corruption · 54c31fd3
      Michael Hanselmann authored
      
      Note: This bug only manifests itself in Ganeti 2.5, but since the
      problematic code also exists in 2.4, I decided to fix it there.
      
      If a node was assigned to a new group using “gnt-group assign-nodes” the
      node object's group would be changed, but not the duplicate member list
      in the group object. The latter is an optimization to require fewer
      locks for other operations. The per-group member list is only kept in
      memory and not written to disk.
      
      Ganeti 2.5 starts to make use of the data kept in the per-group member
      list and consequently fails when it is out of date. The following
      commands can be used to reproduce the issue in 2.5 (in 2.4 the issue was
      confirmed using additional logging):
      
        $ gnt-group add foo
        $ gnt-group assign-nodes foo $(gnt-node list --no-header -o name)
        $ gnt-cluster verify  # Fails with KeyError
      
      This patch moves the code modifying node and group objects into
      “config.ConfigWriter” to do the complete operation under the config
      lock, and also to avoid making use of side-effects of modifying objects
      without calling “ConfigWriter.Update”. A unittest is included.
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
      (cherry picked from commit 218f4c3d)
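      A minimal Python sketch of the invariant this fix restores (toy classes, not Ganeti's real ConfigWriter API): reassigning a node must update both the node object's group reference and the in-memory per-group member lists, and the whole operation should happen under a single config lock:

        import threading

        class Node(object):
          def __init__(self, name, group):
            self.name = name
            self.group = group      # name/UUID of the owning group

        class NodeGroup(object):
          def __init__(self, name):
            self.name = name
            self.members = set()    # duplicate member list, kept in memory only

        class ToyConfig(object):
          def __init__(self, nodes, groups):
            self._lock = threading.Lock()
            self._nodes = dict((n.name, n) for n in nodes)
            self._groups = dict((g.name, g) for g in groups)
            for node in nodes:
              self._groups[node.group].members.add(node.name)

          def AssignGroupNodes(self, group_name, node_names):
            """Move nodes to a new group, updating both sides atomically.

            Changing only node.group (as the buggy LU did) would leave the
            per-group member lists stale; here both are kept in sync.
            """
            with self._lock:
              target = self._groups[group_name]
              for name in node_names:
                node = self._nodes[name]
                self._groups[node.group].members.discard(name)
                node.group = group_name
                target.members.add(name)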
    • Fix pylint warning on unreachable code · 9c4f4dd6
      Michael Hanselmann authored
      
      Commit c50452c3 added an exception when all instances should be
      evacuated off a node, but did so in a way which made pylint complain
      about unreachable code.
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
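      The warning itself is generic; a hedged Python sketch (hypothetical helper names, not the Ganeti LU code) of what triggers it and one way to restructure:

        def _ComputeMoves(node):
          """Hypothetical stand-in for the real evacuation logic."""
          return []

        def EvacuateNode(node):
          # An unconditional raise followed by further statements is what
          # pylint reports as unreachable code (W0101).
          raise NotImplementedError("evacuating all instances is not supported")
          return _ComputeMoves(node)  # never reached

        def EvacuateNodeFixed(node, evacuate_all):
          # Raising only on the unsupported path keeps the rest reachable.
          if evacuate_all:
            raise NotImplementedError("evacuating all instances is not supported")
          return _ComputeMoves(node)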
  2. Nov 16, 2011
    • htools: rework message display construction · bdd8c739
      Iustin Pop authored
      
      While diagnosing some (unrelated) memory usage in htools, I've
      stumbled upon some very bad behaviour in checkData: mapAccum is
      non-strict, and so is the tuple we use, which results in the list of
      lists of messages being very bad space-wise (hundreds of MB of memory
      for a simulated cluster with thousands of nodes, all with errors).
      
      The new, explicit reuse of the old message list has a linear memory
      behaviour. The only downside is that messages are listed in the
      reverse order (which I'll fix on master).
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
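      The real change is in Haskell and is about forcing strictness; purely as an illustration of why reusing one accumulator makes the output come out reversed (hypothetical names, Python rather than htools' Haskell):

        def CollectMessages(nodes, get_node_errors):
          """Thread one message list through the per-node checks.

          Each node's messages are prepended onto the running list (the
          moral equivalent of consing in Haskell), which is why the final
          output ends up in reverse order, as noted above.
          """
          messages = []
          for node in nodes:
            for msg in get_node_errors(node):
              messages.insert(0, msg)   # prepend: newest messages go first
          return messages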
    • hbal: handle empty node groups · 2072221f
      Iustin Pop authored
      
      This patch changes an internal assert (which can only be triggered
      when a node group is empty) into properly handling this case (and
      returning empty node/instance lists).
      
      While we could handle this in the backend (Cluster.splitNodeGroup)
      this would actually mean that we change the behaviour for a cluster
      with just two node groups, one of which is empty (where today we
      don't require a node group argument).
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
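      A small Python sketch of the behavioural change (toy data layout, not hbal's Haskell types): selecting an empty group should yield empty node/instance lists rather than tripping an assertion:

        def SplitNodeGroup(group, nodes, instances):
          """Return the nodes and instances belonging to one node group."""
          group_nodes = [n for n in nodes if n["group"] == group]
          # Previously an internal assertion ruled this case out; an empty
          # group now simply produces empty result lists.
          if not group_nodes:
            return ([], [])
          names = set(n["name"] for n in group_nodes)
          group_insts = [i for i in instances if i["pnode"] in names]
          return (group_nodes, group_insts)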
  3. Sep 30, 2011
    • LUClusterVerifyGroup: Spread SSH checks over more nodes · 64c7b383
      Michael Hanselmann authored
      
      When verifying a group the code would always check SSH to all nodes in
      the same group, as well as the first node for every other group. On big
      clusters this can cause issues since many nodes will try to connect to
      the first node of another group at the same time. This patch changes the
      algorithm to choose a different node every time.
      
      A unittest for the selection algorithm is included.
      
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
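      To illustrate the idea (an illustrative scheme only, not Ganeti's exact selection algorithm): derive the index of the remote node to contact from the verifying group, so different groups spread their SSH checks over different target nodes instead of all hitting the first node:

        import hashlib

        def PickSshCheckTargets(own_group, groups):
          """Pick one node from every other group for the SSH check.

          groups maps a group name to the list of node names in that group.
          """
          seed = int(hashlib.sha1(own_group.encode("utf-8")).hexdigest(), 16)
          targets = []
          for (name, nodes) in sorted(groups.items()):
            if name == own_group or not nodes:
              continue
            # Instead of always nodes[0], pick an index that depends on
            # which group is doing the verification.
            targets.append(sorted(nodes)[seed % len(nodes)])
          return targets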
    • Optimise cli.JobExecutor with many pending jobs · 11705e3d
      Iustin Pop authored
      
      When we submit many pending jobs (> 100) to the masterd, the
      JobExecutor 'spams' the master daemon with requests for the
      status of all the jobs, even though in the end it will only choose a
      single job for polling.
      
      This is very sub-optimal, because when the master is busy processing
      small/fast jobs, this query forces reading all the jobs from
      disk. Restricting the 'window' of jobs that we query from the entire
      set to a smaller subset makes a huge difference (masterd only, 0s
      delay jobs, all jobs to tmpfs thus no I/O involved):
      
      - submitting/waiting for 500 jobs:
        - before: ~21 s
        - after:   ~5 s
      - submitting/waiting for 1K jobs:
        - before: ~76 s
        - after:   ~8 s
      
      This is with a batch of 25 jobs. With a batch of 50 jobs, it goes from
      8s to 12s. I think that choosing the 'best' job for nice output only
      matters with a small number of jobs, and that for more than that
      people will not actually watch the jobs. So changing from 'perfect
      job' to 'best job in the first 25' should be OK.
      
      Note that most jobs won't execute as fast as 0 delay, but this is
      still a good improvement.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
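      A sketch of the idea with hypothetical names (not the real cli.JobExecutor interface): only the first batch of pending jobs is queried for status before picking one to poll:

        POLL_WINDOW = 25   # how many pending jobs to query per round

        def ChooseJobToPoll(pending_job_ids, query_status):
          """Pick one job to poll without querying every pending job.

          query_status(ids) stands in for a single status query returning
          the status of the given jobs; limiting it to a small window avoids
          spamming masterd when hundreds of jobs are pending.
          """
          window = pending_job_ids[:POLL_WINDOW]
          if not window:
            return None
          statuses = query_status(window)
          # Prefer a job that is already running ("best" within the window),
          # otherwise fall back to the first pending one.
          for (job_id, status) in zip(window, statuses):
            if status == "running":
              return job_id
          return window[0]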