HSPACE(1) Ganeti | Version @GANETI_VERSION@
===========================================

NAME
----

hspace - Cluster space analyzer for Ganeti

SYNOPSIS
--------

**hspace** {backend options...} [algorithm options...] [request options...]
[output options...] [-v... | -q]

**hspace** \--version

Backend options:

{ **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* |
**\--simulate** *spec* | **-I** *path* }

Algorithm options:

**[ \--max-cpu *cpu-ratio* ]**
**[ \--min-disk *disk-ratio* ]**
**[ -O *name...* ]**

Request options:

**[\--disk-template** *template* **]**

**[\--standard-alloc** *disk,ram,cpu* **]**

**[\--tiered-alloc** *disk,ram,cpu* **]**

Output options:

**[\--machine-readable**[=*CHOICE*] **]**
**[-p**[*fields*]**]**

DESCRIPTION
-----------

hspace computes how many additional instances can fit on a cluster,
while maintaining N+1 status.

The program will try to place instances, all of the same size, on the
cluster, until the point where no further N+1 compliant allocation is
possible. It uses the exact same allocation algorithm as the hail
iallocator plugin in *allocate* mode.

The output of the program is designed either for human consumption (the
default) or, when enabled with the ``--machine-readable`` option
(described further below), for machine consumption. In the latter case,
it is intended to be interpreted as a shell fragment (or parsed as a
*key=value* file). Options which extend the output (e.g. ``-p``, ``-v``)
will print the additional information on stderr (so that stdout remains
parseable).

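As an illustration, a minimal sketch of consuming this output from a
shell script (assuming data is collected over LUXI via ``-L``; any
other backend works the same way)::

  # stdout holds the key=value pairs; -p/-v extras go to stderr
  out=$(hspace --machine-readable -L) || exit 1
  # interpret the output as a shell fragment, defining the HTS_* keys
  eval "$out"
  # HTS_OK=1 marks a successful computation (see the key list below)
  if [ "${HTS_OK:-0}" = 1 ]; then
      echo "instances that can still be allocated: $HTS_ALLOC_COUNT"
  fi
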
By default, the instance specifications will be read from the cluster;
the options ``--standard-alloc`` and ``--tiered-alloc`` can be used to
override them.

The following keys are available in the machine-readable output of the
script (all prefixed with *HTS_*):

SPEC_MEM, SPEC_DSK, SPEC_CPU, SPEC_RQN, SPEC_DISK_TEMPLATE
  These represent the specifications of the instance model used for
  allocation (the memory, disk, cpu, requested nodes, disk template).

TSPEC_INI_MEM, TSPEC_INI_DSK, TSPEC_INI_CPU, ...
  Only defined when tiered allocation mode is enabled, these are
  similar to the above specifications but show the initial starting
  spec for tiered allocation.

CLUSTER_MEM, CLUSTER_DSK, CLUSTER_CPU, CLUSTER_NODES
  These represent the total memory, disk, CPU count and total nodes in
  the cluster.

INI_SCORE, FIN_SCORE
  These are the initial (current) and final cluster score (see the hbal
  man page for details about the scoring algorithm).

INI_INST_CNT, FIN_INST_CNT
  The initial and final instance count.

INI_MEM_FREE, FIN_MEM_FREE
  The initial and final total free memory in the cluster (but this
  doesn't necessarily mean available for use).

INI_MEM_AVAIL, FIN_MEM_AVAIL
  The initial and final total available memory for allocation in the
  cluster. If allocating redundant instances, new instances could
  increase the reserved memory so it doesn't necessarily mean the
  entirety of this memory can be used for new instance allocations.

INI_MEM_RESVD, FIN_MEM_RESVD
  The initial and final reserved memory (for redundancy/N+1 purposes).

INI_MEM_INST, FIN_MEM_INST
  The initial and final memory used for instances (actual runtime used
  RAM).

INI_MEM_OVERHEAD, FIN_MEM_OVERHEAD
  The initial and final memory overhead, i.e. memory used for the node
  itself and unaccounted memory (e.g. due to hypervisor overhead).

INI_MEM_EFF, FIN_MEM_EFF
  The initial and final memory efficiency, represented as instance
  memory divided by total memory.

INI_DSK_FREE, INI_DSK_AVAIL, INI_DSK_RESVD, INI_DSK_INST, INI_DSK_EFF
  Initial disk stats, similar to the memory ones.

FIN_DSK_FREE, FIN_DSK_AVAIL, FIN_DSK_RESVD, FIN_DSK_INST, FIN_DSK_EFF
  Final disk stats, similar to the memory ones.

INI_CPU_INST, FIN_CPU_INST
  Initial and final number of virtual CPUs used by instances.

INI_CPU_EFF, FIN_CPU_EFF
  The initial and final CPU efficiency, represented as the count of
  virtual instance CPUs divided by the total physical CPU count.

INI_MNODE_MEM_AVAIL, FIN_MNODE_MEM_AVAIL
  The initial and final maximum per-node available memory. This is not
  very useful as a metric but can give an impression of the status of
  the nodes; as an example, this value restricts the maximum instance
  size that can still be created on the cluster.

INI_MNODE_DSK_AVAIL, FIN_MNODE_DSK_AVAIL
  Like the above but for disk.

TSPEC
  This parameter holds the pairs of specifications and counts of
  instances that can be created in the *tiered allocation* mode. The
  value of the key is a space-separated list of values; each value is of
  the form *memory,disk,vcpu=count* where the memory, disk and vcpu are
  the values for the current spec, and count is how many instances of
  this spec can be created. A complete value for this variable could be:
  **4096,102400,2=225 2560,102400,2=20 512,102400,2=21**.
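
  As a sketch, and assuming the machine-readable output has already
  been ``eval``-ed as a shell fragment (see above), the pairs can be
  split like this::

    for pair in $HTS_TSPEC; do
        spec=${pair%=*}     # memory,disk,vcpu
        count=${pair#*=}    # instance count for this spec
        echo "spec $spec: $count instances"
    done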

KM_USED_CPU, KM_USED_NPU, KM_USED_MEM, KM_USED_DSK
  These represent the metrics of used resources at the start of the
  computation (only for tiered allocation mode). The NPU value is the
  "normalized" CPU count, i.e. the number of virtual CPUs divided by
  the maximum ratio of the virtual to physical CPUs.

KM_POOL_CPU, KM_POOL_NPU, KM_POOL_MEM, KM_POOL_DSK
  These represent the total resources allocated during the tiered
  allocation process. In effect, they represent how much is readily
  available for allocation.

KM_UNAV_CPU, KM_UNAV_NPU, KM_UNAV_MEM, KM_UNAV_DSK
  These represent the resources left over (either free as in
  unallocable or allocable on their own) after the tiered allocation
  has been completed. They better represent the actual unallocable
  resources, because some other resource has been exhausted. For
  example, the cluster might still have 100GiB disk free, but with no
  memory left for instances, we cannot allocate another instance, so
  in effect the disk space is unallocable. Note that the CPUs here
  represent instance virtual CPUs, and in case the *\--max-cpu* option
  hasn't been specified this will be -1.

ALLOC_USAGE
  The current usage represented as the initial number of instances
  divided by the final number of instances.

ALLOC_COUNT
  The number of instances allocated (delta between FIN_INST_CNT and
  INI_INST_CNT).

ALLOC_FAIL*_CNT
  For the last allocation attempt (which would have increased
  FIN_INST_CNT by one, had it succeeded), this is the count of the
  failure reasons per failure type; currently defined are FAILMEM,
  FAILDISK and FAILCPU, which represent errors due to not enough
  memory, disk and CPUs, and FAILN1, which represents a non N+1
  compliant cluster on which we can't allocate instances at all.

ALLOC_FAIL_REASON
  The reason for most of the failures, as one of the above FAIL*
  strings.

OK
  A marker representing the successful end of the computation, and
  having value "1". If this key is not present in the output it means
  that the computation failed and any values present should not be
  relied upon.

Many of the INI_/FIN_ metrics will also be displayed with a TRL_ prefix,
and denote the cluster status at the end of the tiered allocation run.

The human output format should be self-explanatory, so it is not
described further.

OPTIONS
-------

The options that can be passed to the program are as follows:

\--disk-template *template*
  Overrides the disk template for the instance read from the cluster;
  one of the Ganeti disk templates (e.g. plain, drbd, and so on) should
  be passed in.

\--spindle-use *spindles*
  Override the spindle use for the instance read from the cluster. The
  value can be 0 (for example for instances that use very low I/O), but not
  negative. For shared storage the value is ignored.

\--max-cpu=*cpu-ratio*
  The maximum virtual to physical cpu ratio, as a floating point number
  greater than or equal to one. For example, specifying *cpu-ratio* as
  **2.5** means that, for a 4-cpu machine, a maximum of 10 virtual cpus
  should be allowed to be in use for primary instances. A value of
  exactly one means there will be no over-subscription of CPU (except
  for the CPU time used by the node itself), and values below one do not
  make sense, as that means other resources (e.g. disk) won't be fully
  utilised due to CPU restrictions.

\--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.

-l *rounds*, \--max-length=*rounds*
  Restrict the number of instance allocations to this length. This is
  not very useful in practice, but can be used for testing hspace
  itself, or to limit the runtime for very big clusters.

-p, \--print-nodes
  Prints the before and after node status, in a format designed to allow
  the user to understand the node's most important parameters. See the
  man page **htools**(1) for more details about this option.

-O *name*
  This option (which can be given multiple times) will mark nodes as
  being *offline*. This means a couple of things:

  - instances won't be placed on these nodes, not even temporarily;
    e.g. the *replace primary* move is not available if the secondary
    node is offline, since this move requires a failover.
  - these nodes will not be included in the score calculation (except
    for the percentage of instances on offline nodes)

  Note that the algorithm will also mark as offline any nodes which
  are reported by RAPI as such, or that have "?" in file-based input
  in any numeric fields.

-S *filename*, \--save-cluster=*filename*
  If given, the state of the cluster at the end of the allocation is
  saved to a file named *filename.alloc*, and if tiered allocation is
  enabled, the state after tiered allocation will be saved to
  *filename.tiered*. This allows re-feeding the cluster state to
  either hspace itself (with different parameters) or for example
  hbal, via the ``-t`` option.
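
  For example, a hypothetical round trip through a state file named
  */tmp/state* might look like::

    hspace -L -S /tmp/state --machine-readable
    # inspect the post-allocation cluster state, e.g. with hbal
    hbal -t /tmp/state.alloc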

-t *datafile*, \--text-data=*datafile*
  Backend specification: the name of the file holding node and instance
  information (if not collecting via RAPI or LUXI). This or one of the
  other backends must be selected. The option is described in the man
  page **htools**(1).

-m *cluster*
  Backend specification: collect data directly from the *cluster* given
  as an argument via RAPI. The option is described in the man page
  **htools**(1).

-L [*path*]
  Backend specification: collect data directly from the master daemon,
  which is to be contacted via LUXI (an internal Ganeti protocol). The
  option is described in the man page **htools**(1).

\--simulate *description*
  Backend specification: similar to the **-t** option, this allows
  overriding the cluster data with a simulated cluster. For details
  about the description, see the man page **htools**(1).

\--standard-alloc *disk,ram,cpu*
  This option overrides the instance size read from the cluster for the
  *standard* allocation mode, where we simply allocate instances of the
  same, fixed size until the cluster runs out of space.

  The specification given is similar to the *\--simulate* option and it
  holds:

  - the disk size of the instance (units can be used)
  - the memory size of the instance (units can be used)
  - the vcpu count for the instance

  An example description would be *100G,4g,2* describing an instance
  specification of 100GB of disk space, 4GiB of memory and 2 VCPUs.
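
  As a usage sketch, asking how many such instances would fit on a
  live cluster (the LUXI backend via ``-L`` is just an example
  choice)::

    hspace -L --standard-alloc 100G,4g,2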

\--tiered-alloc *disk,ram,cpu*
  This option overrides the instance size for the *tiered* allocation
  mode. In this mode, the algorithm starts from the given specification
  and allocates until there is no more space; then it decreases the
  specification and tries the allocation again. The decrease is done on
  the metric that last failed during allocation. The argument should
  have the same format as for ``--standard-alloc``.

  Also note that the normal allocation and the tiered allocation are
  independent, and both start from the initial cluster state; as such,
  the instance counts for these two modes are not related to one
  another.
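
  As a sketch, a tiered run whose per-tier results will end up in the
  TSPEC key of the machine-readable output (the flag values here are
  example choices)::

    hspace -L --tiered-alloc 100G,16g,8 --machine-readable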

\--machine-readable[=*choice*]
  By default, the output of the program is in "human-readable" format,
  i.e. text descriptions. By passing this flag you can either enable
  (``--machine-readable`` or ``--machine-readable=yes``) or explicitly
  disable (``--machine-readable=no``) the machine-readable format
  described above.

-v, \--verbose
  Increase the output verbosity. Each usage of this option will
  increase the verbosity (currently more than 2 doesn't make sense)
  from the default of one.

-q, \--quiet
  Decrease the output verbosity. Each usage of this option will
  decrease the verbosity (less than zero doesn't make sense) from the
  default of one.

-V, \--version
  Just show the program version and exit.

UNITS
~~~~~

By default, all unit-accepting options use mebibytes. Using the
lower-case letters of *m*, *g* and *t* (or their longer equivalents of
*mib*, *gib*, *tib*, for which case doesn't matter) explicit binary
units can be selected. Units in the SI system can be selected using the
upper-case letters of *M*, *G* and *T* (or their longer equivalents of
*MB*, *GB*, *TB*, for which case doesn't matter).

More details about the difference between the SI and binary systems can
be read in the *units(7)* man page.
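
For example, assuming a *disk* argument to ``--standard-alloc``, the
following would all denote the same 100 GiB, while *100G* would
instead select 100 SI GB::

  102400
  100g
  100GiB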

EXIT STATUS
-----------

The exit status of the command will be zero, unless the algorithm
failed fatally for some reason (e.g. wrong node or instance data).

BUGS
----

The algorithm is highly dependent on the number of nodes; its runtime
grows exponentially with this number, and as such is impractical for
really big clusters.

The algorithm doesn't rebalance the cluster or try to get the optimal
fit; it just allocates in the best place for the current step, without
taking into consideration the impact on future placements.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: