1. 23 Jun, 2010 2 commits
  2. 17 Jun, 2010 4 commits
  3. 15 Jun, 2010 2 commits
  4. 11 Jun, 2010 6 commits
  5. 01 Jun, 2010 1 commit
    • Add a new opcode timestamp field · b9b5abcb
      Iustin Pop authored
      Since the current start_timestamp opcode attribute refers to the initial
      start time, before locks are acquired, it's not useful for determining
      the actual execution order of two opcodes/jobs competing for the same lock.
      This patch adds a new field, exec_timestamp, that is updated when the
      opcode moves from OP_STATUS_WAITLOCK to OP_STATUS_RUNNING, thus allowing
      a clear view of the execution history. The new field is visible in the
      job output via the 'opexec' field.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
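      The mechanism described above can be sketched roughly as follows. This is a hedged illustration, not Ganeti's actual code: the OpCode class, the mark_running method, and the use of time.time() are simplified stand-ins; only the field names (start_timestamp, exec_timestamp) and the WAITLOCK/RUNNING status names come from the commit message.

      ```python
      import time

      # Status names as given in the commit message
      OP_STATUS_WAITLOCK = "waiting"
      OP_STATUS_RUNNING = "running"

      class OpCode:
          """Simplified stand-in for a Ganeti opcode (illustrative only)."""
          def __init__(self):
              # start_timestamp: set at submission, before locks are acquired
              self.start_timestamp = time.time()
              self.exec_timestamp = None
              self.status = OP_STATUS_WAITLOCK

          def mark_running(self):
              # exec_timestamp is recorded on the WAITLOCK -> RUNNING
              # transition, so it reflects actual execution order rather
              # than submission order
              if self.status == OP_STATUS_WAITLOCK:
                  self.status = OP_STATUS_RUNNING
                  self.exec_timestamp = time.time()

      op = OpCode()
      op.mark_running()
      ```

      Two opcodes submitted in one order but granted a contended lock in the opposite order would show that reversal in their exec_timestamp values, which start_timestamp cannot.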
  6. 08 Mar, 2010 2 commits
  7. 13 Jan, 2010 4 commits
  8. 04 Jan, 2010 4 commits
  9. 28 Dec, 2009 1 commit
  10. 25 Nov, 2009 1 commit
  11. 06 Nov, 2009 2 commits
    • Processor: support a unique execution id · adfa97e3
      Guido Trotter authored
      When the processor is executing a job, it can export the execution id to
      its callers. This is not supported for Queries, as they're not executed
      in a job.
      Signed-off-by: Guido Trotter <ultrotter@google.com>
      Reviewed-by: Iustin Pop <iustin@google.com>
    • Fix pylint 'E' (error) codes · 6c881c52
      Iustin Pop authored
      This patch adds some silences and tweaks the code slightly so that
      “pylint --rcfile pylintrc -e ganeti” doesn't give any errors.
      The biggest change is in jqueue.py: the move of _RequireOpenQueue out of
      the JobQueue class. Since it is actually a function and not a method
      (it is never used as one), this makes sense and also silences two pylint
      errors.
      Another real code change is in utils.py, where FieldSet.Matches will
      return None instead of False on failure; this still works with how the
      class/method is used, and makes more sense (it more closely resembles
      the re.match return values).
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
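      The rationale for the FieldSet.Matches change mirrors the standard library: re.match returns a match object on success and None on failure, and callers test the result for truthiness either way. A minimal illustration — this FieldSet is a hypothetical simplification for the example, not Ganeti's class:

      ```python
      import re

      class FieldSet:
          """Hypothetical sketch of a field-name matcher (illustrative only)."""
          def __init__(self, *patterns):
              self.patterns = [re.compile(p + "$") for p in patterns]

          def Matches(self, field):
              # Return the match object on success and None on failure,
              # like re.match, instead of True/False; truthiness-based
              # callers keep working unchanged
              for pattern in self.patterns:
                  m = pattern.match(field)
                  if m:
                      return m
              return None

      fs = FieldSet(r"name", r"size/\d+")
      ```

      Code written as `if fs.Matches(field): ...` behaves identically before and after the change, which is why it is safe.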
  12. 03 Nov, 2009 1 commit
  13. 12 Oct, 2009 1 commit
  14. 25 Sep, 2009 1 commit
  15. 17 Sep, 2009 1 commit
  16. 15 Sep, 2009 4 commits
  17. 07 Sep, 2009 1 commit
    • Optimise multi-job submit · 009e73d0
      Iustin Pop authored
      Currently, on multi-job submits we simply iterate over the
      single-job-submit function. This means we grab a new serial, write and
      replicate (and wait for the remote nodes to ack) the serial file, and
      only then create the job file; this is repeated N times, once for each
      job.
      Since job identifiers are ‘cheap’, it's simpler to grab a block of new
      IDs at the start, write and replicate the serial count file a single
      time, and then proceed with the jobs as before. This is a cheap change
      that reduces I/O and slightly reduces the CPU consumption of the master
      daemon: submit time seems to be cut in half for big batches of jobs,
      and masterd CPU time drops by somewhere between 15% and 50% (I can't
      get consistent numbers).
      Note that this doesn't change anything for single-job submits, and most
      probably not for small (< 5 jobs) submits either.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
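      The optimisation amounts to amortising the expensive write-and-replicate step over the whole batch. A rough sketch under simplified assumptions — the class, method names, and the replication stand-in below are hypothetical, not jqueue's actual internals:

      ```python
      class SerialCounter:
          """Toy serial allocator; write_count tracks the expensive
          write-and-replicate operations the commit wants to minimise."""
          def __init__(self):
              self.serial = 0
              self.write_count = 0

          def _write_and_replicate(self):
              # Stand-in for writing the serial file and waiting for
              # remote node acks (the costly, per-call part)
              self.write_count += 1

          def new_serial(self):
              # Old path: one write-and-replicate per job submitted
              self.serial += 1
              self._write_and_replicate()
              return self.serial

          def new_serials(self, count):
              # New path: grab a block of IDs up front and replicate
              # the serial count file a single time for the whole batch
              start = self.serial + 1
              self.serial += count
              self._write_and_replicate()
              return list(range(start, self.serial + 1))

      counter = SerialCounter()
      ids = counter.new_serials(5)
      ```

      For N jobs the old path performs N replications and the new path performs one, which is consistent with the reported halving of submit time for big batches.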
  18. 03 Sep, 2009 1 commit
  19. 27 Aug, 2009 1 commit