Commit 11705e3d authored by Iustin Pop's avatar Iustin Pop

Optimise cli.JobExecutor with many pending jobs



When we submit many pending jobs (> 100) to masterd, the JobExecutor
'spams' the master daemon with requests for the status of all the
jobs, even though in the end it will only choose a single job for
polling.

This is very sub-optimal, because when the master is busy processing
small/fast jobs, this query forces it to read the state of every
pending job. Restricting the 'window' of jobs that we query from the
entire set to a smaller subset makes a huge difference (masterd only,
0s delay jobs, all jobs on tmpfs thus no I/O involved):

- submitting/waiting for 500 jobs:
  - before: ~21 s
  - after:   ~5 s
- submitting/waiting for 1K jobs:
  - before: ~76 s
  - after:   ~8 s

This is with a batch of 25 jobs. With a batch of 50 jobs, the 1K case
goes from 8s to 12s. I think that choosing the 'best' job for nice
output only matters with a small number of jobs; beyond that, people
will not actually watch the jobs. So changing from 'perfect job' to
'best job in the first 25' should be OK.

Note that most jobs won't execute as fast as 0 delay, but this is
still a good improvement.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
parent 0c009cc5
@@ -257,6 +257,9 @@ _PRIONAME_TO_VALUE = dict(_PRIORITY_NAMES)
   QR_UNKNOWN,
   QR_INCOMPLETE) = range(3)
 
+#: Maximum batch size for ChooseJob
+_CHOOSE_BATCH = 25
+
 class _Argument:
   def __init__(self, min=0, max=None): # pylint: disable=W0622
@@ -3055,7 +3058,8 @@ class JobExecutor(object):
     """
     assert self.jobs, "_ChooseJob called with empty job list"
-    result = self.cl.QueryJobs([i[2] for i in self.jobs], ["status"])
+    result = self.cl.QueryJobs([i[2] for i in self.jobs[:_CHOOSE_BATCH]],
+                               ["status"])
     assert result
     for job_data, status in zip(self.jobs, result):