Splitting the query and job execution paths
===========================================


Currently, the master daemon has two main roles:

- execute jobs that change the cluster state
- respond to queries

Due to the technical details of the implementation, the job execution
and query paths interact with each other; for example, the "masterd
hang" issue that we had late in the 2.5 release cycle was due to the
interaction between job queries and job execution.

Furthermore, also because of technical limitations of the
implementation (Python lacking read-only variables being one example),
we can't share the internal job data structures; instead, on the query
path, we read them from disk in order not to block job execution due
to locks.

All these point to the fact that the integration of both queries and
job execution in the same (multi-threaded) process creates more
problems than advantages, and hence we should look into separating
them.

Proposed design
---------------

In Ganeti 2.7, we will introduce a separate, optional daemon to handle
queries (note: whether this is an actual "new" daemon, or its
functionality is folded into confd, remains to be seen).

This daemon will expose exactly the same Luxi interface as masterd,
except that job submission will be disabled. If so configured (at
build time), clients will be changed to (a dispatch sketch follows the
list):

- keep sending REQ_SUBMIT_JOB, REQ_SUBMIT_MANY_JOBS, and all requests
  except REQ_QUERY_* to the masterd socket (and also generic queries
  for QR_LOCK)
- redirect all REQ_QUERY_* requests (except generic queries for
  QR_LOCK) to the Luxi socket of the new daemon
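
The following is a minimal Python sketch of such a client-side
dispatch. The socket paths, the ``Query``-prefixed method names and
the ``"lock"`` resource name (QR_LOCK) are assumptions for
illustration, not the actual Ganeti client code::

  # Hypothetical socket paths; the real ones come from Ganeti's
  # constants.
  MASTERD_SOCKET = "/var/run/ganeti/socket/ganeti-master"
  QUERYD_SOCKET = "/var/run/ganeti/socket/ganeti-query"

  def target_socket(method, args):
      """Decide which Luxi socket a request should be sent to."""
      if not method.startswith("Query"):
          # Job submission and all other non-query requests stay on
          # the masterd socket.
          return MASTERD_SOCKET
      if method == "Query" and args and args[0] == "lock":
          # Generic queries for QR_LOCK are internal to masterd and
          # must not be redirected.
          return MASTERD_SOCKET
      # All other REQ_QUERY_* requests go to the new daemon.
      return QUERYD_SOCKET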

This new daemon will serve both pure configuration queries (which
confd can already serve) and run-time queries (which currently only
masterd can serve). Since the RPC can be done from any node to any
node, the new daemon can run on all master candidates, not only on the
master node. This means that all gnt-* list options can now be run on
nodes other than the master node. If we implement this as a separate
daemon that talks to confd, then we could actually run it on all
nodes of the cluster (to be decided).

During the 2.7 release, masterd will still respond to queries itself,
but it will log all such queries for identification of "misbehaving"
clients.
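
A possible shape for that logging, sketched in Python (the handler
signature is made up for illustration)::

  import logging

  query_log = logging.getLogger("masterd.legacy-queries")

  def log_legacy_query(handler):
      # Wrap a masterd query handler so that every query still arriving
      # on the masterd socket is logged; grepping the log later
      # identifies clients that have not switched to the new daemon.
      def wrapper(method, args):
          query_log.warning("query via masterd: %s %r", method, args)
          return handler(method, args)
      return wrapper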


As far as I can see, this will bring some significant advantages.

First, we remove any interaction between the job execution and cluster
query state. This means that bugs in the locking code (job execution)
will not impact queries of the cluster state, nor queries of the job
execution itself. Furthermore, we will be able to use different tuning
parameters for job execution (e.g. 25 threads) versus queries (since
queries are transient, we could practically allow an unlimited number
of query threads).
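
As an illustration only (the pool sizes and the use of
``concurrent.futures`` are assumptions, not the actual masterd
implementation), the two pools could then be tuned independently::

  from concurrent.futures import ThreadPoolExecutor

  # Job execution keeps a small, fixed pool, since each job worker may
  # hold cluster locks for a long time.
  job_pool = ThreadPoolExecutor(max_workers=25)

  # Queries are transient and read-only, so the query daemon can use a
  # much larger pool without risking lock starvation.
  query_pool = ThreadPoolExecutor(max_workers=256)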

As a result of the above split, we move from the current model, where
shutdown of the master daemon practically "breaks" the entire Ganeti
functionality (no job execution nor queries, not even connecting to
the instance console), to a split model:

- if just masterd is stopped, then other cluster functionality remains
  available: listing instances, connecting to the console of an
  instance, etc.
- if just "luxid" is stopped, masterd can still process jobs, and one
  can furthermore run queries from other nodes (master candidates)
- only if both are stopped do we end up with the previous state

This will help, for example, in the case where the master node has
crashed and we haven't failed it over yet: querying and investigating
the cluster state will still be possible from other master candidates
(on small clusters, this will mean from all nodes).

A last advantage is that we will finally be able to reduce the
footprint of masterd; instead of the previously discussed splitting of
individual jobs, which would require duplicating all the base
functionality, this splits off just the queries, a more trivial piece
of code than job execution. This should be a reasonable work effort,
with a much smaller impact in case of failure (we can still run
masterd as before).


We might get increased inconsistency during queries, as there will be
a delay between masterd saving an updated configuration and
confd/query loading and parsing it. However, this could be compensated
for by the fact that queries will only look at "snapshots" of the
configuration, whereas before they could also see "in-progress"
modifications (due to the non-atomic updates). I think these effects
will cancel each other out; we will have to see in practice how it
works.
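
A minimal sketch of such snapshot-based reading, assuming the
configuration is a JSON file that masterd replaces atomically on disk
(the hard-coded path is only for illustration)::

  import json
  import os

  class ConfigSnapshot:
      """Reread the configuration only when the file on disk changes.

      Every query then sees a complete snapshot, never an in-progress
      modification.
      """

      def __init__(self, path="/var/lib/ganeti/config.data"):
          self._path = path
          self._mtime = None
          self._data = None

      def get(self):
          mtime = os.stat(self._path).st_mtime
          if mtime != self._mtime:
              with open(self._path) as fh:
                  self._data = json.load(fh)
              self._mtime = mtime
          return self._data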

Another disadvantage *might* be that we have a more complex setup, due
to the introduction of a new daemon. However, the query path will be
much simpler, and when we remove the query functionality from masterd
we should have a more robust system.

Finally, we have QR_LOCK, which is an internal query related to the
master daemon, using the same infrastructure as the other queries
(related to cluster state). This is unfortunate, and will require
untangling in order to keep code duplication low.

Long-term plans
---------------

If this works well, the plan would be (tentatively) to disable the
query functionality in masterd completely in Ganeti 2.8, in order to
remove the duplication. This might change depending on whether and how
we split out the configuration/locking daemon.

Once we split this out, there is no technical reason why we can't
execute any query from any node, except maybe practical reasons
(network topology, remote nodes, etc.) or security reasons (depending
on whether we want to change the cluster security model). In any case,
it should be possible to do this in a reliable way from all master
candidates.

Update: We decided to keep the restriction of running queries on the
master node only. The reason is that it would be confusing, from a
usability point of view, if querying worked on any node but then, when
the user tried to submit a job, it suddenly didn't.

Some implementation details
---------------------------

We will fold this into confd, at least initially, to reduce the
proliferation of daemons. Haskell will (if used properly) limit any
too-deep integration between the old "confd" functionality and the new
query one. As an advantage, we'll have a single daemon that handles
all configuration queries.

The redirection of Luxi requests can easily be done based on the
request type, if we have both sockets open, or if we open them on
demand.
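
Opening on demand could look like the following sketch (error handling
and reconnection are omitted)::

  import socket

  _connections = {}

  def get_connection(path):
      # Open each Luxi socket lazily on first use and cache it; a
      # client that only ever queries then never touches the masterd
      # socket at all.
      conn = _connections.get(path)
      if conn is None:
          conn = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
          conn.connect(path)
          _connections[path] = conn
      return conn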

We don't want masterd to talk to luxid itself (hidden redirection),
since we want to be able to run queries while masterd is down.

During the 2.7 release cycle, we can test all queries against both
masterd and luxid in QA, so we know we have exactly the same interface
and it is consistent.
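
Such a QA check can be as simple as running each query against both
sockets and comparing the answers; ``luxi_call`` below is an assumed
helper that sends a request to the given socket and returns the
decoded result, and the socket path constants are those from the
earlier dispatch sketch::

  def check_query_consistency(luxi_call, what, fields):
      # Ask both daemons the same question; any difference in the
      # answers indicates an interface inconsistency.
      via_masterd = luxi_call(MASTERD_SOCKET, "Query", [what, fields, None])
      via_luxid = luxi_call(QUERYD_SOCKET, "Query", [what, fields, None])
      if via_masterd != via_luxid:
          raise AssertionError("masterd and luxid disagree on %r" % what)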

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: