HSPACE(1) Ganeti | Version @GANETI_VERSION@
===========================================

NAME
----


hspace - Cluster space analyzer for Ganeti

SYNOPSIS
--------


**hspace** {backend options...} [algorithm options...] [request options...]
[ -p [*fields*] ] [-v... | -q]

**hspace** --version

Backend options:

{ **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* |
**--simulate** *spec* }

Algorithm options:

**[ --max-cpu *cpu-ratio* ]**
**[ --min-disk *disk-ratio* ]**
**[ -O *name...* ]**

Request options:

**[--memory** *mem* **]**
**[--disk** *disk* **]**
**[--disk-template** *template* **]**
**[--vcpus** *vcpus* **]**
**[--tiered-alloc** *spec* **]**

DESCRIPTION
-----------


hspace computes how many additional instances can be fit on a cluster,
while maintaining N+1 status.

The program will try to place instances, all of the same size, on the
cluster, until the point where no further N+1-compliant allocation is
possible. It uses the exact same allocation algorithm as the hail
iallocator plugin in *allocate* mode.

The output of the program is designed to be interpreted as a shell
fragment (or parsed as a *key=value* file). Options which extend the
output (e.g. -p, -v) will print the additional information on stderr
(such that the stdout remains parseable).
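
For illustration only, a run's output might be consumed as follows
(the file name and the choice of the LUXI backend are arbitrary)::

    $ hspace -L > hspace.out 2>/dev/null
    $ . ./hspace.out                 # source the key=value fragment
    $ test "$HTS_OK" = 1 || echo "hspace computation failed" >&2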

The following keys are available in the output of the script (all
prefixed with *HTS_*):

SPEC_MEM, SPEC_DSK, SPEC_CPU, SPEC_RQN, SPEC_DISK_TEMPLATE
  These represent the specifications of the instance model used for
  allocation (the memory, disk, cpu, requested nodes, disk template).

CLUSTER_MEM, CLUSTER_DSK, CLUSTER_CPU, CLUSTER_NODES
  These represent the total memory, disk, CPU count and total nodes in
  the cluster.

INI_SCORE, FIN_SCORE
  These are the initial (current) and final cluster score (see the
  hbal man page for details about the scoring algorithm).

INI_INST_CNT, FIN_INST_CNT
  The initial and final instance count.

INI_MEM_FREE, FIN_MEM_FREE
  The initial and final total free memory in the cluster (but this
  doesn't necessarily mean available for use).

INI_MEM_AVAIL, FIN_MEM_AVAIL
  The initial and final total available memory for allocation in the
  cluster. If allocating redundant instances, new instances could
  increase the reserved memory, so it doesn't necessarily mean the
  entirety of this memory can be used for new instance allocations.

INI_MEM_RESVD, FIN_MEM_RESVD
  The initial and final reserved memory (for redundancy/N+1 purposes).

INI_MEM_INST, FIN_MEM_INST
  The initial and final memory used for instances (actual runtime used
  RAM).

INI_MEM_OVERHEAD, FIN_MEM_OVERHEAD
  The initial and final memory overhead, i.e. memory used for the node
  itself and unaccounted memory (e.g. due to hypervisor overhead).

INI_MEM_EFF, FIN_MEM_EFF
  The initial and final memory efficiency, represented as instance
  memory divided by total memory.

INI_DSK_FREE, INI_DSK_AVAIL, INI_DSK_RESVD, INI_DSK_INST, INI_DSK_EFF
  Initial disk stats, similar to the memory ones.

FIN_DSK_FREE, FIN_DSK_AVAIL, FIN_DSK_RESVD, FIN_DSK_INST, FIN_DSK_EFF
  Final disk stats, similar to the memory ones.

INI_CPU_INST, FIN_CPU_INST
  Initial and final number of virtual CPUs used by instances.

INI_CPU_EFF, FIN_CPU_EFF
  The initial and final CPU efficiency, represented as the count of
  virtual instance CPUs divided by the total physical CPU count.

INI_MNODE_MEM_AVAIL, FIN_MNODE_MEM_AVAIL
  The initial and final maximum per-node available memory. This is not
  very useful as a metric but can give an impression of the status of
  the nodes; as an example, this value restricts the maximum instance
  size that can still be created on the cluster.

INI_MNODE_DSK_AVAIL, FIN_MNODE_DSK_AVAIL
  Like the above, but for disk.

TSPEC
  If the tiered allocation mode has been enabled, this parameter holds
  the pairs of specifications and counts of instances that can be
  created in this mode. The value of the key is a space-separated list
  of values; each value is of the form *memory,disk,vcpu=count*, where
  memory, disk and vcpu are the values for the current spec, and count
  is how many instances of this spec can be created. A complete value
  for this variable could be: **4096,102400,2=225 2560,102400,2=20
  512,102400,2=21**.
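
  As a sketch, such a value can be split using normal shell word
  splitting (this assumes the output was sourced as in the earlier
  example)::

      for spec in $HTS_TSPEC; do
        count=${spec#*=}          # instance count for this spec
        dims=${spec%=*}           # the memory,disk,vcpu triple
        echo "spec $dims fits $count more instances"
      done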

KM_USED_CPU, KM_USED_NPU, KM_USED_MEM, KM_USED_DSK
  These represent the metrics of used resources at the start of the
  computation (only for tiered allocation mode). The NPU value is the
  "normalized" CPU count, i.e. the number of virtual CPUs divided by
  the maximum ratio of virtual to physical CPUs.

KM_POOL_CPU, KM_POOL_NPU, KM_POOL_MEM, KM_POOL_DSK
  These represent the total resources allocated during the tiered
  allocation process. In effect, they represent how much is readily
  available for allocation.

KM_UNAV_CPU, KM_UNAV_NPU, KM_UNAV_MEM, KM_UNAV_DSK
  These represent the resources left over (either free as in
  unallocable, or allocable on their own) after the tiered allocation
  has been completed. They better reflect the actually unallocable
  resources, because some other resource has been exhausted. For
  example, the cluster might still have 100GiB of disk free, but with
  no memory left for instances, we cannot allocate another instance,
  so in effect the disk space is unallocable. Note that the CPUs here
  represent instance virtual CPUs, and in case the *--max-cpu* option
  hasn't been specified this will be -1.

ALLOC_USAGE
  The current usage, represented as the initial number of instances
  divided by the final number of instances.

ALLOC_INSTANCES
  The number of instances allocated (delta between FIN_INST_CNT and
  INI_INST_CNT).

ALLOC_FAIL*_CNT
  For the last attempt at allocation (which would have increased
  FIN_INST_CNT by one, had it succeeded), this is the count of the
  failure reasons per failure type; currently defined are FAILMEM,
  FAILDISK and FAILCPU, which represent errors due to not enough
  memory, disk and CPUs, and FAILN1, which represents a non-N+1
  compliant cluster on which we can't allocate instances at all.

ALLOC_FAIL_REASON
  The reason for most of the failures, being one of the above FAIL*
  strings.
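
  For instance, the failure counters and overall failure reason can be
  inspected by filtering the saved output (file name as in the earlier
  example)::

      $ grep '^HTS_ALLOC_FAIL' hspace.out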

OK
  A marker representing the successful end of the computation, and
  having value "1". If this key is not present in the output it means
  that the computation failed and any values present should not be
  relied upon.

If the tiered allocation mode is enabled, then many of the INI_/FIN_
metrics will also be displayed with a TRL_ prefix, denoting the
cluster status at the end of the tiered allocation run.
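
For example, the plain and tiered end states could be compared by
filtering the two prefixes (illustrative only)::

    $ grep -E '^HTS_(FIN|TRL)_' hspace.out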


OPTIONS
-------

The options that can be passed to the program are as follows:

--memory *mem*
  The memory size of the instances to be placed (defaults to 4GiB).

--disk *disk*
  The disk size of the instances to be placed (defaults to 100GiB).

--disk-template *template*
  The disk template for the instance; one of the Ganeti disk templates
  (e.g. plain, drbd, so on) should be passed in.

--vcpus *vcpus*
  The number of VCPUs of the instances to be placed (defaults to 1).

--max-cpu=*cpu-ratio*
  The maximum virtual to physical cpu ratio, as a floating point
  number greater than or equal to one. For example, specifying
  *cpu-ratio* as **2.5** means that, for a 4-cpu machine, a maximum of
  10 virtual cpus should be allowed to be in use for primary
  instances. A value of exactly one means there will be no
  over-subscription of CPU, and values below one do not make sense, as
  that means other resources (e.g. disk) won't be fully utilised due
  to CPU restrictions.

--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.

-p, --print-nodes
  Prints the before and after node status, in a format designed to
  allow the user to understand the node's most important parameters.

  It is possible to customise the listed information by passing a
  comma-separated list of field names to this option (the field list
  is currently undocumented), or to extend the default field list by
  prefixing the additional field list with a plus sign (see the
  example after the field list below). By default, the node list will
  contain the following information:

  F
    a character denoting the status of the node, with '-' meaning an
    offline node, '*' meaning N+1 failure and blank meaning a good
    node

  Name
    the node name

  t_mem
    the total node memory

  n_mem
    the memory used by the node itself

  i_mem
    the memory used by instances

  x_mem
    the amount of memory which seems to be in use, but it cannot be
    determined why or by which instance; usually this means that the
    hypervisor has some overhead or that there are other reporting
    errors

  f_mem
    the free node memory

  r_mem
    the reserved node memory, which is the amount of free memory
    needed for N+1 compliance

  t_dsk
    total disk

  f_dsk
    free disk

  pcpu
    the number of physical cpus on the node

  vcpu
    the number of virtual cpus allocated to primary instances

  pri
    number of primary instances

  sec
    number of secondary instances

  p_fmem
    percent of free memory

  p_fdsk
    percent of free disk

  r_cpu
    ratio of virtual to physical cpus

  lCpu
    the dynamic CPU load (if the information is available)

  lMem
    the dynamic memory load (if the information is available)

  lDsk
    the dynamic disk load (if the information is available)

  lNet
    the dynamic net load (if the information is available)
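
  For example, the default field list might be extended with the
  dynamic load fields like this (invocation illustrative)::

      $ hspace -L --print-nodes=+lCpu,lMem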

-O *name*
  This option (which can be given multiple times) will mark nodes as
  being *offline*. This means a couple of things:

  - instances won't be placed on these nodes, not even temporarily;
    e.g. the *replace primary* move is not available if the secondary
    node is offline, since this move requires a failover.
  - these nodes will not be included in the score calculation (except
    for the percentage of instances on offline nodes)

  Note that the algorithm will also mark as offline any nodes which
  are reported by RAPI as such, or that have "?" in file-based input
  in any numeric fields.
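
  For example, to compute capacity while treating two nodes as already
  offline (node names hypothetical)::

      $ hspace -L -O node1.example.com -O node2.example.com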

-t *datafile*, --text-data=*datafile*
  The name of the file holding node and instance information (if not
  collecting via RAPI or LUXI). This or one of the other backends must
  be selected.

-S *filename*, --save-cluster=*filename*
  If given, the state of the cluster at the end of the allocation is
  saved to a file named *filename.alloc*, and if tiered allocation is
  enabled, the state after tiered allocation will be saved to
  *filename.tiered*. This allows re-feeding the cluster state to
  either hspace itself (with different parameters) or for example
  hbal, via the *-t* option.
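
  A sketch of such a round-trip (file names and specs illustrative)::

      $ hspace -L --tiered-alloc 102400,8192,2 -S state
      $ hspace -t state.tiered --memory 4096 --disk 51200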

-m *cluster*
  Collect data directly from the *cluster* given as an argument via
  RAPI. If the argument doesn't contain a colon (:), then it is
  converted into a fully-built URL by prepending ``https://`` and
  appending the default RAPI port; otherwise it's considered a
  fully-specified URL and is used as-is.
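
  For example, assuming the default RAPI port of 5080 (an assumption;
  check your installation), the following two invocations would be
  equivalent::

      $ hspace -m cluster1
      $ hspace -m https://cluster1:5080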

-L [*path*]
  Collect data directly from the master daemon, which is to be
  contacted via LUXI (an internal Ganeti protocol). An optional *path*
  argument is interpreted as the path to the unix socket on which the
  master daemon listens; otherwise, the default path used by Ganeti
  when installed with *--localstatedir=/var* is used.

--simulate *description*
  Instead of using actual data, build an empty cluster given a node
  description. The *description* parameter must be a comma-separated
  list of five elements, describing in order:

  - the allocation policy for this node group
  - the number of nodes in the cluster
  - the disk size of the nodes, in mebibytes
  - the memory size of the nodes, in mebibytes
  - the cpu core count for the nodes

  An example description would be **preferred,20,102400,16384,4**,
  describing a 20-node cluster where each node has 100GiB of disk
  space, 16GiB of memory and 4 CPU cores. Note that all nodes must
  have the same specs currently.

  This option can be given multiple times, and each new use defines a
  new node group. Hence different node groups can have different
  allocation policies and node count/specifications.
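
  A hypothetical two-group simulation could look like this (the
  *last_resort* policy name is an assumption based on Ganeti's node
  group allocation policies)::

      $ hspace --simulate preferred,10,409600,32768,8 \
               --simulate last_resort,5,204800,16384,4 --memory 2048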

--tiered-alloc *spec*
  Besides the standard, fixed-size allocation, also do a tiered
  allocation scheme where the algorithm starts from the given
  specification and allocates until there is no more space; then it
  decreases the specification and tries the allocation again. The
  decrease is done on the metric that last failed during
  allocation. The specification given is similar to the *--simulate*
  option and it holds:

  - the disk size of the instance
  - the memory size of the instance
  - the vcpu count for the instance

  An example description would be *10240,8192,2*, describing an
  initial starting specification of 10GiB of disk space, 8GiB of
  memory and 2 VCPUs.

  Also note that the normal allocation and the tiered allocation are
  independent, and both start from the initial cluster state; as such,
  the instance counts for these two modes are not related to one
  another.
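
  For instance, a tiered run starting from a 100GiB-disk, 8GiB-memory,
  2-VCPU specification (numbers illustrative) would be requested as::

      $ hspace -L --tiered-alloc 102400,8192,2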

-v, --verbose
  Increase the output verbosity. Each usage of this option will
  increase the verbosity (currently more than 2 doesn't make sense)
  from the default of one.

-q, --quiet
  Decrease the output verbosity. Each usage of this option will
  decrease the verbosity (less than zero doesn't make sense) from the
  default of one.

-V, --version
  Just show the program version and exit.


EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason
the algorithm fatally failed (e.g. wrong node or instance data).


BUGS
----

The algorithm is highly dependent on the number of nodes; its runtime
grows exponentially with this number, and as such is impractical for
really big clusters.

The algorithm doesn't rebalance the cluster or try to get the optimal
fit; it just allocates in the best place for the current step, without
taking into consideration the impact on future placements.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: