From 1cdc9dbb56c31855da394439aba55b914c1d9639 Mon Sep 17 00:00:00 2001
From: Bernardo Dal Seno <bdalseno@google.com>
Date: Wed, 30 Nov 2011 18:05:29 +0100
Subject: [PATCH] manpages: Fix small errors in documentation

Mostly typos, except for the output of "gnt-instance migrate" in an
example, which has been updated to the current version

Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
---
 doc/admin.rst        |  2 +-
 doc/walkthrough.rst  | 12 ++++++------
 man/gnt-backup.rst   |  4 ++--
 man/gnt-instance.rst | 36 ++++++++++++++++++++----------------
 4 files changed, 29 insertions(+), 25 deletions(-)

diff --git a/doc/admin.rst b/doc/admin.rst
index 92c2362cf..f18b64435 100644
--- a/doc/admin.rst
+++ b/doc/admin.rst
@@ -591,7 +591,7 @@ For all three cases, the ``replace-disks`` operation can be used::
 
 Since the process involves copying all data from the working node to
 the target node, it will take a while, depending on the instance's disk
-size, node I/O system and network speed. But it is (baring any network
+size, node I/O system and network speed. But it is (barring any network
 interruption) completely transparent for the instance.
 
 Re-creating disks for non-redundant instances
diff --git a/doc/walkthrough.rst b/doc/walkthrough.rst
index a1733578a..071936091 100644
--- a/doc/walkthrough.rst
+++ b/doc/walkthrough.rst
@@ -104,8 +104,8 @@ And let's check that we have a valid OS::
   debootstrap
 node1#
 
-Running a burnin
-----------------
+Running a burn-in
+-----------------
 
 Now that the cluster is created, it is time to check that the hardware
 works correctly, that the hypervisor can actually create instances,
@@ -263,8 +263,8 @@ guide. Similar output lines are replaced with ``…`` in the below log::
   …
 node1#
 
-You can see in the above what operations the burnin does. Ideally, the
-burnin log would proceed successfully through all the steps and end
+You can see in the above what operations the burn-in does. Ideally, the
+burn-in log would proceed successfully through all the steps and end
 cleanly, without throwing errors.
 
 Instance operations
@@ -584,7 +584,7 @@ reused. Re-adding it is simple::
   Mon Oct 26 05:27:39 2009 - INFO: Readding a node, the offline/drained flags were reset
   Mon Oct 26 05:27:39 2009 - INFO: Node will be a master candidate
 
-And is now working again::
+And it is now working again::
 
   node1# gnt-node list
   Node  DTotal DFree MTotal MNode MFree Pinst Sinst
@@ -592,7 +592,7 @@
   node2 1.3T  1.3T  32.0G  1.0G 30.4G     1     3
   node3 1.3T  1.3T  32.0G  1.0G 30.4G     0     0
 
-.. note:: If you have the Ganeti has been built with the htools
+.. note:: If Ganeti has been built with the htools
    component enabled, you can shuffle the instances around to have a
    better use of the nodes.
 
diff --git a/man/gnt-backup.rst b/man/gnt-backup.rst
index 7956af042..0e0077a46 100644
--- a/man/gnt-backup.rst
+++ b/man/gnt-backup.rst
@@ -138,11 +138,11 @@ inherited from the export. Possible parameters are:
 
 maxmem
     the maximum memory size of the instance; as usual, suffixes can be
-    used to denote the unit, otherwise the value is taken in mebibites
+    used to denote the unit, otherwise the value is taken in mebibytes
 
 minmem
     the minimum memory size of the instance; as usual, suffixes can be
-    used to denote the unit, otherwise the value is taken in mebibites
+    used to denote the unit, otherwise the value is taken in mebibytes
 
 vcpus
     the number of VCPUs to assign to the instance (if this value makes
diff --git a/man/gnt-instance.rst b/man/gnt-instance.rst
index 5801e5763..dcee0d448 100644
--- a/man/gnt-instance.rst
+++ b/man/gnt-instance.rst
@@ -61,7 +61,7 @@ reuse those volumes (instead of creating new ones) as the
 instance's disks. Ganeti will rename these volumes to the standard
 format, and (without installing the OS) will use them as-is for the
 instance. This allows migrating instances from non-managed mode
-(e.q. plain KVM with LVM) to being managed via Ganeti. Note that
+(e.g. plain KVM with LVM) to being managed via Ganeti. Please note that
 this works only for the \`plain' disk template (see below for
 template details).
 
@@ -130,11 +130,11 @@ values are inherited from the cluster. Possible parameters are:
 
 maxmem
     the maximum memory size of the instance; as usual, suffixes can be
-    used to denote the unit, otherwise the value is taken in mebibites
+    used to denote the unit, otherwise the value is taken in mebibytes
 
 minmem
     the minimum memory size of the instance; as usual, suffixes can be
-    used to denote the unit, otherwise the value is taken in mebibites
+    used to denote the unit, otherwise the value is taken in mebibytes
 
 vcpus
     the number of VCPUs to assign to the instance (if this value makes
@@ -180,7 +180,7 @@ boot\_order
     n
         network boot (PXE)
 
-    The default is not to set an HVM boot order which is interpreted
+    The default is not to set an HVM boot order, which is interpreted
     as 'dc'.
 
     For KVM the boot order is either "floppy", "cdrom", "disk" or
@@ -1444,27 +1444,31 @@ ignored.
 
 The option ``-f`` will skip the prompting for confirmation.
 
 If ``--allow-failover`` is specified it tries to fallback to failover if
-it already can determine that a migration wont work (i.e. if the
-instance is shutdown). Please note that the fallback will not happen
+it already can determine that a migration won't work (i.e. if the
+instance is shut down). Please note that the fallback will not happen
 during execution. If a migration fails during execution it still fails.
 
 Example (and expected output)::
 
     # gnt-instance migrate instance1
-    Migrate will happen to the instance instance1. Note that migration is
-    **experimental** in this version. This might impact the instance if
-    anything goes wrong. Continue?
+    Instance instance1 will be migrated. Note that migration
+    might impact the instance if anything goes wrong (e.g. due to bugs in
+    the hypervisor). Continue?
     y/[n]/?: y
+    Migrating instance instance1.example.com
     * checking disk consistency between source and target
-    * ensuring the target is in secondary mode
+    * switching node node2.example.com to secondary mode
+    * changing into standalone mode
     * changing disks into dual-master mode
-      - INFO: Waiting for instance instance1 to sync disks.
-      - INFO: Instance instance1's disks are in sync.
+    * wait until resync is done
+    * preparing node2.example.com to accept the instance
     * migrating instance to node2.example.com
-    * changing the instance's disks on source node to secondary
-      - INFO: Waiting for instance instance1 to sync disks.
-      - INFO: Instance instance1's disks are in sync.
-    * changing the instance's disks to single-master
+    * switching node node1.example.com to secondary mode
+    * wait until resync is done
+    * changing into standalone mode
+    * changing disks into single-master mode
+    * wait until resync is done
+    * done
     #
 
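A note for anyone test-driving the behaviour touched by the last hunk: the
``--allow-failover`` flag described there lets the command degrade to a
failover when Ganeti can already tell that a live migration cannot work
(for example, the instance is shut down). A minimal sketch, with an
illustrative instance name::

    # Live-migrate instance1; if migration is known in advance to be
    # impossible, fall back to a failover instead of aborting.
    gnt-instance migrate --allow-failover instance1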
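Since this is a plain ``git format-patch`` mail, it can be applied locally
with ``git am``, which preserves the author, date and the Signed-off-by /
Reviewed-by trailers. A minimal sketch, assuming the mail was saved as
``manpages-doc-fixes.patch`` inside a Ganeti checkout (the file name is
illustrative)::

    # Apply the mbox-formatted patch with its full commit metadata.
    git am manpages-doc-fixes.patch

    # If the surrounding context has drifted, retry with a three-way merge.
    git am -3 manpages-doc-fixes.patch

    # Confirm that only the four documentation files were touched.
    git show --stat HEAD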