1. 09 Apr, 2010 1 commit
    • Fix new pylint errors · fe7c59d5
      Guido Trotter authored

      Under squeeze, pylint reports the following errors:
      ************* Module ganeti.serializer
      E1103:155:LoadSignedJson: Instance of 'False' has no 'get' member (but some types could not be inferred)
      ************* Module ganeti-masterd
      E1103:166:ClientRqHandler.handle: Instance of 'False' has no 'get' member (but some types could not be inferred)
      E1103:167:ClientRqHandler.handle: Instance of 'False' has no 'get' member (but some types could not be inferred)
      ************* Module gnt-instance
      E1103:431:BatchCreate: Instance of 'False' has no 'keys' member (but some types could not be inferred)
      
      For the first two cases pylint is actually wrong: we had already
      checked that the variable on which "get" is called is a dict. In the
      third case, though, no such check exists, so we add one. Then we
      silence the error in all three places.
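
      The fix follows a common pattern (a sketch with hypothetical names,
      not the actual ganeti code): check the type explicitly, then silence
      the spurious warning on the access itself.

        def load_signed(data):
          # 'data' is verified to be a dict right here, but pylint's
          # inference still flags the .get() calls below with E1103.
          if not isinstance(data, dict):
            raise ValueError("expected a dict")
          # pylint: disable-msg=E1103
          return data.get("msg"), data.get("salt")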
      Signed-off-by: Guido Trotter <ultrotter@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
  2. 18 Mar, 2010 1 commit
  3. 18 Feb, 2010 1 commit
  4. 13 Jan, 2010 1 commit
  5. 05 Jan, 2010 1 commit
    • Introduce a Luxi call for GetTags · 7699c3af
      Iustin Pop authored

      This changes the cli scripts from submitting jobs to get the tags to
      using queries, which (since the tags query is a cheap one) should be
      much faster.
      
      The tags queries are already done without locks (in the generic query
      paths for instances/nodes/cluster), so this shouldn't break tags query
      via gnt-* list-tags.
      
      On a small cluster, the runtime of gnt-cluster/gnt-instance list-tags
      more than halves; on a big cluster (with many MCs) I expect it to be
      more than 5 times faster. The main gain is not the speed of the tags
      query itself, but eliminating a job when a simple query is enough.
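
      Client-side, the change is roughly the following (a sketch; the
      method name and constant follow the existing luxi.Client conventions
      but are assumptions, not the exact new API):

        from ganeti import constants, luxi

        cl = luxi.Client()
        # Before: SubmitJob([...OpGetTags...]) and then polling the job
        # for its result. After: one cheap, lockless query, answered
        # directly by masterd.
        tags = cl.QueryTags(constants.TAG_CLUSTER, "")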
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: René Nussbaumer <rn@google.com>
  6. 04 Jan, 2010 3 commits
  7. 25 Nov, 2009 1 commit
  8. 06 Nov, 2009 2 commits
  9. 02 Nov, 2009 1 commit
  10. 29 Sep, 2009 1 commit
  11. 25 Sep, 2009 1 commit
  12. 17 Sep, 2009 1 commit
  13. 15 Sep, 2009 2 commits
  14. 31 Aug, 2009 1 commit
  15. 27 Aug, 2009 1 commit
  16. 26 Aug, 2009 3 commits
  17. 25 Aug, 2009 1 commit
  18. 20 Aug, 2009 1 commit
  19. 10 Aug, 2009 1 commit
  20. 29 Jul, 2009 1 commit
  21. 25 Jul, 2009 1 commit
    • Collapse daemon's main function · 04ccf5e9
      Guido Trotter authored

      With three ganeti daemons, and one or two more coming, the daemons'
      main functions had become mostly cut&pasted code. This collapses most
      of it into a daemon.GenericMain function. More code could be shared
      between the two http-based daemons, but since the new daemons won't
      be http-based we won't do that right now.

      As a bonus, the ability to override the network port on the command
      line is added for all network-based daemons.
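
      The shape of such a generic main is roughly the following (a sketch
      with illustrative parameter names, not the exact daemon.GenericMain
      signature):

        def generic_main(daemon_name, parser, check_fn, exec_fn):
          """Shared daemon startup: options, checks, daemonize, run."""
          # The bonus feature: every network-based daemon gets -p/--port.
          parser.add_option("-p", "--port", dest="port", type="int",
                            help="network port to listen on")
          options, args = parser.parse_args()
          check_fn(options, args)  # daemon-specific sanity checks
          # ... daemonize, set up logging, write the pid file ...
          exec_fn(options, args)   # the daemon's actual main loop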
      Signed-off-by: Guido Trotter <ultrotter@google.com>
  22. 24 Jul, 2009 2 commits
  23. 23 Jul, 2009 1 commit
  24. 19 Jul, 2009 1 commit
    • Add a luxi call for multi-job submit · 56d8ff91
      Iustin Pop authored

      As a workaround for the job submit timeouts that we have, this patch
      adds a new luxi call for multi-job submit; the advantage is that all
      the jobs are added to the queue first, and only afterwards can the
      workers start processing them.
      
      This is definitely faster than per-job submit, where the submission of
      new jobs competes with the workers processing jobs.
      
      On a pure no-op OpDelay opcode (not on master, not on nodes), we have:
        - 100 jobs:
          - individual: submit time ~21s, processing time ~21s
          - multiple:   submit time 7-9s, processing time ~22s
        - 250 jobs:
          - individual: submit time ~56s, processing time ~57s
                        run 2:      ~54s                  ~55s
          - multiple:   submit time ~20s, processing time ~51s
                        run 2:      ~17s                  ~52s
      
      which shows that we indeed gain on the client side, and maybe even in
      the total processing time for a high number of jobs. For just 10 or
      so jobs I expect the difference to be just noise.
      
      This will probably require increasing the timeout a little when
      submitting too many jobs - 250 jobs at ~20 seconds is close to the
      current rw timeout of 60s.
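
      Client-side, the difference is roughly the following (a sketch; the
      opcode arguments are illustrative):

        from ganeti import luxi, opcodes

        cl = luxi.Client()
        ops = [[opcodes.OpTestDelay(duration=0, on_master=False,
                                    on_nodes=[])]
               for _ in range(100)]
        # Before: one round-trip per job, each submission competing with
        # the workers already processing earlier jobs:
        #   job_ids = [cl.SubmitJob(op) for op in ops]
        # After: a single call; all jobs are queued before any worker
        # starts on them.
        job_ids = cl.SubmitManyJobs(ops)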
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
      (cherry picked from commit 2971c913)
  25. 14 Jul, 2009 1 commit
    • ganeti-masterd: avoid SimpleConfigReader · b2890442
      Guido Trotter authored

      SimpleStore is a lot more lightweight than SimpleConfigReader, and to
      just get the master name we can use it instead. This is currently the
      only usage of SimpleConfigReader, but we're not going to delete the
      class, as new usages will come with ganeti-confd (in 2.1). Those,
      though, will make the class even heavier to load, so it makes sense
      to convert this simple usage.
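
      The replacement is roughly the following (a sketch; SimpleStore
      answers from the small ssconf files instead of parsing the full
      cluster configuration):

        from ganeti import ssconf

        # Before: SimpleConfigReader(), which loads the whole cluster
        # config just to answer one question.
        master_name = ssconf.SimpleStore().GetMasterNode()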
      Signed-off-by: Guido Trotter <ultrotter@google.com>
  26. 08 Jul, 2009 2 commits
  27. 07 Jul, 2009 1 commit
  28. 15 Jun, 2009 1 commit
  29. 21 May, 2009 1 commit
    • Add a luxi call for multi-job submit · 2971c913
      Iustin Pop authored

      As a workaround for the job submit timeouts that we have, this patch
      adds a new luxi call for multi-job submit; the advantage is that all
      the jobs are added to the queue first, and only afterwards can the
      workers start processing them.
      
      This is definitely faster than per-job submit, where the submission of
      new jobs competes with the workers processing jobs.
      
      On a pure no-op OpDelay opcode (not on master, not on nodes), we have:
        - 100 jobs:
          - individual: submit time ~21s, processing time ~21s
          - multiple:   submit time 7-9s, processing time ~22s
        - 250 jobs:
          - individual: submit time ~56s, processing time ~57s
                        run 2:      ~54s                  ~55s
          - multiple:   submit time ~20s, processing time ~51s
                        run 2:      ~17s                  ~52s
      
      which shows that we indeed gain on the client side, and maybe even in
      the total processing time for a high number of jobs. For just 10 or
      so jobs I expect the difference to be just noise.
      
      This will probably require increasing the timeout a little when
      submitting too many jobs - 250 jobs at ~20 seconds is close to the
      current rw timeout of 60s.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
  30. 04 May, 2009 1 commit
  31. 06 Apr, 2009 2 commits
    • Disable synchronous (locking) queries · 77921a95
      Iustin Pop authored
      This patch raises an error in the master daemon in case the user
      requests a locking query; accordingly, all clients were modified to
      send only lockless queries. This is a short-term fix; the proper fix
      is to modify the clients to submit a job when the user requests a
      locking query.

      The other approach would be to ignore the flag passed by the client;
      this would be worse, as clients wouldn't even get an error.
      
      The possible impact of this is multiple:
        - some commands might not have been converted, and will thus fail;
          this can be remedied easily
        - the consistency of commands is lost; e.g. node failover will not
          lock the node *while we get the node info*, so we could miss some
          data; this is again in the vein of the atomic operations which
          are missing in the current query-and-act model of the gnt-*
          scripts
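
      On the masterd side, the guard is conceptually this simple (a sketch
      with illustrative names, not the exact code):

        from ganeti import errors

        def check_query(use_locking):
          # Locking queries are refused outright; all in-tree clients now
          # send lockless queries, so hitting this means an unconverted
          # (or third-party) client, which at least gets a clear error.
          if use_locking:
            raise errors.OpPrereqError("Sync queries are not allowed")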
      
      Reviewed-by: imsnah, ultrotter
    • Add some more debugging info to masterd · e566ddbd
      Iustin Pop authored
      This patch will log data about queries, which today are completely
      invisible (at the default log level) in the master log file.
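
      The added visibility is plain logging in the request handling, along
      these lines (illustrative; 'method' and 'args' stand for the decoded
      fields of the luxi request):

        import logging

        def log_query(method, args):
          # Record queries in masterd's log; previously nothing was
          # written for them at the default log level.
          logging.info("query: method %s, args %s", method, args)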
      
      Reviewed-by: imsnah