Commit 7faf5110 authored by Michael Hanselmann

Wrap documentation to max 72 characters per line

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
parent 558fd122
@@ -7,8 +7,8 @@ Version 2.0.3

- Added ``--ignore-size`` to the ``gnt-instance activate-disks`` command
  to allow using the pre-2.0.2 behaviour in activation, if any existing
  instances have mismatched disk sizes in the configuration
- Added ``gnt-cluster repair-disk-sizes`` command to check and update
  any configuration mismatches for disk sizes
- Added ``gnt-cluster master-failover --no-voting`` to allow master
  failover to work on two-node clusters
- Fixed the ``--net`` option of ``gnt-backup import``, which was unusable
@@ -61,9 +61,9 @@ Version 2.0.1

- the watcher now also restarts the node daemon and the rapi daemon if
  they died
- fixed the watcher to handle full and drained queue cases
- hooks export more instance data in the environment, which helps if
  hook scripts need to take action based on the instance's properties
  (no longer need to query back into ganeti)
- instance failovers when the instance is stopped do not check for free
  RAM, so that failing over a stopped instance is possible in low memory
  situations
@@ -169,10 +169,10 @@ Version 2.0 beta 1

- all commands are executed by a daemon (``ganeti-masterd``) and the
  various ``gnt-*`` commands are just front-ends to it
- all the commands are entered into, and executed from a job queue,
  see the ``gnt-job(8)`` manpage
- the RAPI daemon supports read-write operations, secured by basic
  HTTP authentication on top of HTTPS
- DRBD version 0.7 support has been removed, DRBD 8 is the only
  supported version (when migrating from Ganeti 1.2 to 2.0, you need
  to migrate to DRBD 8 first while still running Ganeti 1.2)
@@ -193,8 +193,8 @@ Version 1.2.7

- Change the default reboot type in ``gnt-instance reboot`` to "hard"
- Reuse the old instance mac address by default on instance import, if
  the instance name is the same.
- Handle situations in which the node info rpc returns incomplete
  results (issue 46)
- Add checks for tcp/udp ports collisions in ``gnt-cluster verify``
- Improved version of batcher:
@@ -218,10 +218,10 @@ Version 1.2.6

- new ``--hvm-nic-type`` and ``--hvm-disk-type`` flags to control the
  type of disk exported to fully virtualized instances.
- provide access to the serial console of HVM instances
- instance auto_balance flag, set by default. If turned off it will
  avoid warnings on cluster verify if there is not enough memory to
  fail over an instance. In the future it will prevent automatically
  failing it over when we support that.
- batcher tool for instance creation, see ``tools/README.batcher``
- ``gnt-instance reinstall --select-os`` to interactively select a new
  operating system when reinstalling an instance.
@@ -347,8 +347,8 @@ Version 1.2.1

Version 1.2.0
-------------

- Log the ``xm create`` output to the node daemon log on failure (to
  help diagnose the error)
- In debug mode, log the output of all failed external commands to the
  logs
- Change parsing of lvm commands to ignore stderr
@@ -384,8 +384,8 @@ Version 1.2b2

  reboots
- Removed dependency on debian's patched fping that uses the
  non-standard ``-S`` option
- Now the OS definitions are searched for in multiple, configurable
  paths (easier for distros to package)
- Some changes to the hooks infrastructure (especially the new
  post-configuration update hook)
- Other small bugfixes
@@ -343,7 +343,8 @@ At this point, the machines are ready for a cluster creation; in case

you want to remove Ganeti completely, you need to also undo some of
the SSH changes and log directories:

- ``rm -rf /var/log/ganeti /srv/ganeti`` (replace with the correct
  paths)
- remove from ``/root/.ssh`` the keys that Ganeti added (check
  the ``authorized_keys`` and ``id_dsa`` files)
- regenerate the host's SSH keys (check the OpenSSH startup scripts)
@@ -16,8 +16,8 @@ Glossary

  the startup of an instance.

OpCode
  A data structure encapsulating a basic cluster operation; for
  example, start instance, add instance, etc.
PVM
  Para-virtualization mode, where the virtual machine knows it's being
@@ -128,8 +128,8 @@ Adds a node to the cluster.

OP_REMOVE_NODE
++++++++++++++

Removes a node from the cluster. On the removed node the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
@@ -350,7 +351,8 @@ Cluster operations

OP_POST_INIT_CLUSTER
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
@@ -360,8 +362,8 @@ This hook is called via a special "empty" LU right after cluster initialization.

OP_DESTROY_CLUSTER
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
@@ -225,9 +225,10 @@ nodes

or ``offline`` flags set. More details about these node status
flags are available in the manpage :manpage:`ganeti(7)`.

.. [*] Note that no run-time data is present for offline or drained
   nodes; this means the tags total_memory, reserved_memory,
   free_memory, total_disk, free_disk, total_cpus, i_pri_memory and
   i_pri_up_memory will be absent

Response message
@@ -108,20 +108,21 @@ and not just *node1*.

.. admonition:: Why a fully qualified host name

  Although most distributions use only the short name in the
  /etc/hostname file, we still think Ganeti nodes should use the full
  name. The reason for this is that calling 'hostname --fqdn' requires
  the resolver library to work and is a 'guess' via heuristics at what
  your domain name is. Since Ganeti can be used among other things to
  host DNS servers, we want to depend on them as little as possible,
  and we'd rather have the uname() syscall return the full node name.

  We haven't ever found any breakage in using a full hostname on a
  Linux system, and anyway we recommend having only a minimal
  installation on Ganeti nodes, and using instances (or other
  dedicated machines) to run the rest of your network services. By
  doing this you can change the /etc/hostname file to contain an FQDN
  without the fear of breaking anything unrelated.
Installing The Hypervisor

@@ -130,9 +131,9 @@ Installing The Hypervisor

**Mandatory** on all nodes.

While Ganeti is developed with the ability to modularly run on
different virtualization environments in mind, the only two currently
usable on a live system are Xen and KVM. Supported Xen versions are:
3.0.3, 3.0.4 and 3.1. Supported KVM versions are 72 and above.

Please follow your distribution's recommended way to install and set
up Xen, or install Xen from the upstream source, if you wish,
@@ -140,9 +141,9 @@ following their manual. For KVM, make sure you have a KVM-enabled

kernel and the KVM tools.

After installing Xen, you need to reboot into your new system. On some
distributions this might involve configuring GRUB appropriately, whereas
others will configure it automatically when you install the respective
kernels. For KVM no reboot should be necessary.

.. admonition:: Xen on Debian
@@ -315,8 +316,8 @@ them will already be installed on a standard machine.

You can use this command line to install all needed packages::

  # apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
    python python-pyopenssl openssl python-pyparsing \
    python-simplejson python-pyinotify
Setting up the environment for Ganeti
-------------------------------------

@@ -326,34 +327,38 @@ Configuring the network
**Mandatory** on all nodes.

You can run Ganeti either in "bridge mode" or in "routed mode". In
bridge mode, the default, the instances' network interfaces will be
attached to a software bridge running in dom0. Xen by default creates
such a bridge at startup, but your distribution might have a different
way to do things, and you'll definitely need to manually set it up under
KVM.

Beware that the default name Ganeti uses is ``xen-br0`` (which was
used in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. The default
bridge your Ganeti cluster will use for new instances can be specified
at cluster initialization time.

If you want to run in "routing mode" you need to specify that at cluster
init time (using the ``--nicparams`` option), and then no bridge will be
needed. In this mode instance traffic will be routed by dom0, instead of
bridged.

In order to use "routing mode" under Xen, you'll need to change the
relevant parameters in the Xen config file. Under KVM instead, no config
change is necessary, but you still need to set up your network
interfaces correctly.

By default, under KVM, the "link" parameter you specify per-nic will
represent, if non-empty, a different routing table name or number to use
for your instances. This allows isolation between different instance
groups, and different routing policies between node traffic and instance
traffic.

You will need to configure your routing table basic routes and rules
outside of ganeti. The vif scripts will only add /32 routes to your
instances, through their interface, in the table you specified (under
KVM, and in the main table under Xen).
.. admonition:: Bridging under Debian
@@ -512,8 +517,8 @@ that the hostname used for this must resolve to an IP address reserved

**exclusively** for this purpose, and cannot be the name of the first
(master) node.

If you want to use a bridge which is not ``xen-br0``, or no bridge at
all, use ``--nicparams``.

If the bridge name you are using is not ``xen-br0``, use the *-b
<BRIDGENAME>* option to specify the bridge name. In this case, you
@@ -11,61 +11,66 @@ It is divided by functional sections

Opcode Execution Locking
------------------------

These locks are declared by Logical Units (LUs) (in cmdlib.py) and
acquired by the Processor (in mcpu.py) with the aid of the Ganeti
Locking Library (locking.py). They are acquired in the following order:

* BGL: this is the Big Ganeti Lock, it exists for backwards
  compatibility. New LUs acquire it in a shared fashion, and are able
  to execute all together (barring other lock waits) while old LUs
  acquire it exclusively and can only execute one at a time, and not
  at the same time as new LUs.
* Instance locks: can be declared in ExpandNames() or DeclareLocks()
  by an LU, and have the same name as the instance itself. They are
  acquired as a set. Internally the locking library acquires them in
  alphabetical order.
* Node locks: can be declared in ExpandNames() or DeclareLocks() by an
  LU, and have the same name as the node itself. They are acquired as
  a set. Internally the locking library acquires them in alphabetical
  order. Given this order it's possible to safely acquire a set of
  instances, and then the nodes they reside on.

The ConfigWriter (in config.py) is also protected by a SharedLock, which
is shared by functions that read the config and acquired exclusively by
functions that modify it. Since the ConfigWriter calls
rpc.call_upload_file to all nodes to distribute the config without
holding the node locks, this call must be able to execute on the nodes
in parallel with other operations (but not necessarily concurrently with
itself on the same file, as inside the ConfigWriter this is called with
the internal config lock held).
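The alphabetical set-acquisition rule can be sketched with plain threading primitives. This is an illustration only, not the locking.py API: acquiring each level's locks as a sorted set gives every thread the same total order, which is what prevents lock-order deadlocks.

```python
import threading

# Hypothetical per-name locks for two instances and two nodes.
instance_locks = {name: threading.Lock() for name in ("web1", "db1")}
node_locks = {name: threading.Lock() for name in ("node1", "node2")}

def acquire_set(locks, names):
    """Acquire a set of same-level locks in alphabetical order."""
    acquired = []
    for name in sorted(names):          # alphabetical, as described
        locks[name].acquire()
        acquired.append(name)
    return acquired

# Instance locks first, then the nodes the instances reside on.
order = acquire_set(instance_locks, {"db1", "web1"})
order += acquire_set(node_locks, {"node2", "node1"})
print(order)  # ['db1', 'web1', 'node1', 'node2']

# Release everything again.
for name in order:
    (instance_locks if name in instance_locks else node_locks)[name].release()
```

Because every thread sorts each level's names before acquiring, two threads can never hold locks of the same level in opposite orders.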
Job Queue Locking
-----------------

The job queue is designed to be thread-safe. This means that its public
functions can be called from any thread. The job queue can be called
from functions called by the queue itself (e.g. logical units), but
special attention must be paid not to create deadlocks or an invalid
state.

The single queue lock is used from all classes involved in the queue
handling. During development we tried to split locks, but deemed it to
be too dangerous and difficult at the time. Job queue functions
acquiring the lock can be safely called from all the rest of the code,
as the lock is released before leaving the job queue again. Unlocked
functions should only be called from job queue related classes (e.g. in
jqueue.py) and the lock must be acquired beforehand.

In the job queue worker (``_JobQueueWorker``), the lock must be released
before calling the LU processor. Otherwise a deadlock can occur when log
messages are added to opcode results.
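A toy illustration of the rule above (class and method names are invented, not Ganeti's): since the queue lock is not reentrant, a queue method must drop it before invoking the processor, because the processor may call back into the queue, e.g. to append log messages.

```python
import threading

class TinyQueue:
    """Minimal sketch of a queue with one internal lock."""

    def __init__(self):
        self._lock = threading.Lock()   # non-reentrant, like a queue lock
        self.messages = []

    def add_log(self, msg):
        # Public, thread-safe entry point: takes the lock itself.
        with self._lock:
            self.messages.append(msg)

    def run_job(self, processor):
        with self._lock:
            pass                        # bookkeeping done under the lock
        # Lock released before calling out: the processor may safely
        # call back into add_log(). Holding the lock here would
        # deadlock on that re-entry.
        processor(self)

q = TinyQueue()
q.run_job(lambda queue: queue.add_log("opcode result"))
print(q.messages)  # ['opcode result']
```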
Node Daemon Locking
-------------------

The node daemon contains a lock for the job queue. In order to avoid
conflicts and/or corruption when an eventual master daemon or another
node daemon is running, it must be held for all job queue operations.

There's one special case for the node daemon running on the master node.
If grabbing the lock in exclusive mode fails on startup, the code
assumes all checks have been done by the process keeping the lock.
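The grab-exclusive-or-assume behaviour can be sketched with a file lock. This is an illustration under assumed names; the lock file path is made up and ``fcntl.flock`` (Linux) merely stands in for whatever primitive the daemon actually uses.

```python
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.mkdtemp(), "queue.lock")

# First daemon: grabs the lock exclusively and performs the checks.
fd1 = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o600)
fcntl.flock(fd1, fcntl.LOCK_EX | fcntl.LOCK_NB)

# Second open file description (standing in for another daemon):
fd2 = os.open(lock_path, os.O_RDWR)
try:
    fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_holder = True
except OSError:
    # Non-blocking grab failed: assume the current holder has already
    # performed the startup checks, as described above.
    second_holder = False
print(second_holder)  # False

os.close(fd2)
os.close(fd1)
```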
.. vim: set textwidth=72 :
@@ -28,7 +28,8 @@ principle.

Generic parameters
------------------

A few parameters mean the same thing across all resources which
implement them.

``bulk``
++++++++
@@ -307,8 +308,8 @@ It supports the following commands: ``GET``.

Requests detailed information about the instance. An optional parameter,
``static`` (bool), can be set to return only static information from the
configuration without querying the instance's nodes. The result will be
a job id.
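As a client-side illustration, such a request URL might be built as follows. The ``info`` resource path and the port used here are assumptions (not confirmed by this excerpt), and no request is actually sent:

```python
from urllib.parse import urlencode

def instance_info_url(host, instance_name, static=False):
    """Build a hypothetical RAPI GET URL for instance information."""
    # Path and port are assumptions for illustration only.
    base = "https://%s:5080/2/instances/%s/info" % (host, instance_name)
    if static:
        # Append the optional ``static`` parameter described above.
        base += "?" + urlencode({"static": 1})
    return base

url = instance_info_url("master.example.com", "web1", static=True)
print(url)
# https://master.example.com:5080/2/instances/web1/info?static=1
```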
``/2/instances/[instance_name]/reboot``
@@ -385,9 +386,9 @@ It supports the following commands: ``POST``.

~~~~~~~~

Takes the parameters ``mode`` (one of ``replace_on_primary``,
``replace_on_secondary``, ``replace_new_secondary`` or
``replace_auto``), ``disks`` (comma separated list of disk indexes),
``remote_node`` and ``iallocator``.

``/2/instances/[instance_name]/tags``
@@ -586,8 +587,8 @@ Example::

Change the node role.

The request is a string which should be PUT to this URI. The result will
be a job id.

It supports the ``force`` argument.
@@ -601,8 +602,8 @@ Manages storage units on the node.

Requests a list of storage units on a node. Requires the parameters
``storage_type`` (one of ``file``, ``lvm-pv`` or ``lvm-vg``) and
``output_fields``. The result will be a job id, using which the result
can be retrieved.

``/2/nodes/[node_name]/storage/modify``
+++++++++++++++++++++++++++++++++++++++
@@ -612,10 +613,11 @@ Modifies storage units on the node.

``PUT``
~~~~~~~

Modifies parameters of storage units on the node. Requires the
parameters ``storage_type`` (one of ``file``, ``lvm-pv`` or ``lvm-vg``)
and ``name`` (name of the storage unit). Parameters can be passed
additionally. Currently only ``allocatable`` (bool) is supported. The
result will be a job id.

``/2/nodes/[node_name]/storage/repair``
+++++++++++++++++++++++++++++++++++++++
@@ -625,9 +627,9 @@ Repairs a storage unit on the node.

``PUT``
~~~~~~~

Repairs a storage unit on the node. Requires the parameters
``storage_type`` (currently only ``lvm-vg`` can be repaired) and
``name`` (name of the storage unit). The result will be a job id.

``/2/nodes/[node_name]/tags``
+++++++++++++++++++++++++++++
@@ -12,8 +12,8 @@ you need to be root to run the cluster commands.

Host issues
-----------

For a host on which the Ganeti software has been installed, but not
joined to a cluster, there are no changes to the system.

For a host that has been joined to the cluster, there are very important
changes:
@@ -65,11 +65,11 @@ nodes:

The SSH traffic is protected (after the initial login to a new node) by
the cluster-wide shared SSH key.

RPC communication between the master and nodes is protected using
SSL/TLS encryption. Both the client and the server must have the
cluster-wide shared SSL/TLS certificate and verify it when establishing
the connection by comparing fingerprints. We decided not to use a CA to
simplify the key handling.
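A minimal sketch of fingerprint-based verification as described above. The helper names and the digest algorithm are illustrative, not Ganeti's actual code; the certificate bytes below are a stand-in, where real code would load the cluster-wide certificate file in DER form.

```python
import hashlib
import hmac

def fingerprint(der_bytes):
    """Hex fingerprint of a DER-encoded certificate (SHA-1 assumed)."""
    return hashlib.sha1(der_bytes).hexdigest()

def verify_peer(peer_der, expected_fp):
    """Accept the peer only if its fingerprint matches the known one."""
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(fingerprint(peer_der), expected_fp)

local_der = b"\x30\x82placeholder-cert-bytes"  # stand-in for a real cert
expected = fingerprint(local_der)

print(verify_peer(local_der, expected))   # True: same certificate
print(verify_peer(b"attacker", expected)) # False: fingerprint mismatch
```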
The DRBD traffic is not protected by encryption, as DRBD does not
support this. It's therefore recommended to implement host-level
@@ -83,20 +83,20 @@ nodes when configuring the device.

Master daemon
-------------

Communication between the command-line tools and the master daemon is
done via a UNIX socket, whose permissions are reset to ``0600`` after
listening but before serving requests. This permission-based protection
is documented and works on Linux, but is not portable; however, Ganeti
doesn't work on non-Linux systems at the moment.
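The permission sequence described above can be sketched as follows (the socket path is made up for the example):

```python
import os
import socket
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "master.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(5)
# Reset permissions after listening but before serving requests, so
# only the owner (root, in the real daemon) can connect from here on.
os.chmod(path, 0o600)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
srv.close()
```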
Remote API
----------

Starting with Ganeti 2.0, Remote API traffic is encrypted using SSL/TLS
by default. It supports Basic authentication as per RFC2617.

Paths for certificate, private key and CA files required for SSL/TLS
will be set at source configure time. Symlinks or command line
parameters may be used to use different files.

.. vim: set textwidth=72 :