diff --git a/COPYING b/COPYING
new file mode 100644
index 0000000000000000000000000000000000000000..623b6258a134210f0b0ada106fdaab7f0370d9c5
--- /dev/null
+++ b/COPYING
@@ -0,0 +1,340 @@
+		    GNU GENERAL PUBLIC LICENSE
+		       Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+     51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+			    Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users.  This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it.  (Some other Free Software Foundation software is covered by
+the GNU Library General Public License instead.)  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+  To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have.  You must make sure that they, too, receive or can get the
+source code.  And you must show them these terms so they know their
+rights.
+
+  We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+  Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software.  If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+  Finally, any free program is threatened constantly by software
+patents.  We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary.  To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+		    GNU GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License.  The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language.  (Hereinafter, translation is included without limitation in
+the term "modification".)  Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+  1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+  2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) You must cause the modified files to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    b) You must cause any work that you distribute or publish, that in
+    whole or in part contains or is derived from the Program or any
+    part thereof, to be licensed as a whole at no charge to all third
+    parties under the terms of this License.
+
+    c) If the modified program normally reads commands interactively
+    when run, you must cause it, when started running for such
+    interactive use in the most ordinary way, to print or display an
+    announcement including an appropriate copyright notice and a
+    notice that there is no warranty (or else, saying that you provide
+    a warranty) and that users may redistribute the program under
+    these conditions, and telling the user how to view a copy of this
+    License.  (Exception: if the Program itself is interactive but
+    does not normally print such an announcement, your work based on
+    the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+    a) Accompany it with the complete corresponding machine-readable
+    source code, which must be distributed under the terms of Sections
+    1 and 2 above on a medium customarily used for software interchange; or,
+
+    b) Accompany it with a written offer, valid for at least three
+    years, to give any third party, for a charge no more than your
+    cost of physically performing source distribution, a complete
+    machine-readable copy of the corresponding source code, to be
+    distributed under the terms of Sections 1 and 2 above on a medium
+    customarily used for software interchange; or,
+
+    c) Accompany it with the information you received as to the offer
+    to distribute corresponding source code.  (This alternative is
+    allowed only for noncommercial distribution and only if you
+    received the program in object code or executable form with such
+    an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it.  For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable.  However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License.  Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+  5. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Program or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+  6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+  7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation.  If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+  10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission.  For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this.  Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+			    NO WARRANTY
+
+  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+		     END OF TERMS AND CONDITIONS
+
+	    How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program; if not, write to the Free Software
+    Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year  name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Library General
+Public License instead of this License.
diff --git a/INSTALL b/INSTALL
new file mode 100644
index 0000000000000000000000000000000000000000..a2b85e6b60e862722e546c749878113e492c2095
--- /dev/null
+++ b/INSTALL
@@ -0,0 +1,28 @@
+Installation of the software
+============================
+
+Before installing, please verify that you have the following programs:
+  - LVM 2
+  - ssh
+  - fping
+  - the python Twisted library (Twisted Core is enough)
+  - the python OpenSSL bindings (pyOpenSSL)
+
+To install, simply do ./configure && make && make install
+
+This will install the software under /usr/local. You then need to copy
+ganeti.initd to /etc/init.d and integrate it into your boot sequence
+(``chkconfig``, ``update-rc.d``, etc.).
+
+Cluster initialisation
+======================
+
+Before initialising the cluster, on each node you need to create the following
+directories:
+
+  - /etc/ganeti
+  - /var/log/ganeti
+  - /var/lib/ganeti
+  - /srv/ganeti and /srv/ganeti/os
+
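+A minimal python sketch that creates these directories (just a convenience;
+the list above is authoritative, and creating them by hand works equally
+well):
+
+  import os
+
+  for path in ("/etc/ganeti", "/var/log/ganeti", "/var/lib/ganeti",
+               "/srv/ganeti", "/srv/ganeti/os"):
+    if not os.path.isdir(path):
+      os.makedirs(path)
+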
+After this, use ``gnt-cluster init``.
diff --git a/Makefile.am b/Makefile.am
new file mode 100644
index 0000000000000000000000000000000000000000..754e14273fa4df17edc66fe3e3f303fc26ade434
--- /dev/null
+++ b/Makefile.am
@@ -0,0 +1,16 @@
+# standard automake rules
+
+SUBDIRS = man lib scripts daemons docs testing tools
+EXTRA_DIST = ganeti.initd
+
+# custom rules
+depgraph: depgraph.png
+
+depgraph.png: depgraph.dot
+	dot -Tpng -o $@ $<
+
+depgraph.ps: depgraph.dot
+	dot -Tps -o $@ $<
+
+depgraph.dot: ganeti/*.py
+	pylint.python2.4 --indent-string '  ' --rcfile=/dev/null --reports y --int-import-graph $@ --persistent n ganeti >/dev/null
diff --git a/README b/README
new file mode 100644
index 0000000000000000000000000000000000000000..c3bba3d49dd7449914dee250d687ab5f1947e600
--- /dev/null
+++ b/README
@@ -0,0 +1,7 @@
+Ganeti 1.2
+==========
+
+For installation instructions, read the INSTALL file.
+
+For a brief introduction, read the ganeti(7) manpage and the other pages
+it suggests.
diff --git a/configure.ac b/configure.ac
new file mode 100644
index 0000000000000000000000000000000000000000..eb82373bca4b75dbe0d200d4bdedc11bf9465fdb
--- /dev/null
+++ b/configure.ac
@@ -0,0 +1,25 @@
+#                                               -*- Autoconf -*-
+# Process this file with autoconf to produce a configure script.
+
+AC_PREREQ(2.59)
+AC_INIT(ganeti, 1.2a, ganeti@googlegroups.com)
+AM_INIT_AUTOMAKE(foreign)
+
+# Checks for programs.
+AC_PROG_INSTALL
+
+# Checks for python
+AM_PATH_PYTHON(2.4)
+
+# Checks for libraries.
+
+# Checks for header files.
+
+# Checks for typedefs, structures, and compiler characteristics.
+
+# Checks for library functions.
+
+AC_CONFIG_FILES([Makefile man/Makefile docs/Makefile 
+		testing/Makefile tools/Makefile
+		lib/Makefile scripts/Makefile daemons/Makefile])
+AC_OUTPUT
diff --git a/daemons/Makefile.am b/daemons/Makefile.am
new file mode 100644
index 0000000000000000000000000000000000000000..82baa7db4db34c8d039db87234f2fe85ee766699
--- /dev/null
+++ b/daemons/Makefile.am
@@ -0,0 +1 @@
+dist_sbin_SCRIPTS = ganeti-noded ganeti-watcher
diff --git a/daemons/ganeti-noded b/daemons/ganeti-noded
new file mode 100755
index 0000000000000000000000000000000000000000..de1f43896bcacc3cc995f801ae80aacb6be9c3be
--- /dev/null
+++ b/daemons/ganeti-noded
@@ -0,0 +1,401 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Ganeti node daemon"""
+
+import os
+import sys
+import resource
+import traceback
+
+from optparse import OptionParser
+
+
+from ganeti import backend
+from ganeti import logger
+from ganeti import constants
+from ganeti import objects
+from ganeti import errors
+from ganeti import ssconf
+
+from twisted.spread import pb
+from twisted.internet import reactor
+from twisted.cred import checkers, portal
+from OpenSSL import SSL
+
+
+class ServerContextFactory:
+  """Context factory providing the SSL context for the twisted listener."""
+  def getContext(self):
+    ctx = SSL.Context(SSL.TLSv1_METHOD)
+    # the same file holds both the certificate and the private key
+    ctx.use_certificate_file(constants.SSL_CERT_FILE)
+    ctx.use_privatekey_file(constants.SSL_CERT_FILE)
+    return ctx
+
+class ServerObject(pb.Avatar):
+  def __init__(self, name):
+    self.name = name
+
+  def perspectiveMessageReceived(self, broker, message, args, kw):
+    """This method is called when a network message is received.
+
+    I will call::
+
+      |  self.perspective_%(message)s(*broker.unserialize(args),
+      |                               **broker.unserialize(kw))
+
+    to handle the method; subclasses of Avatar are expected to
+    implement methods following this naming convention.
+    """
+
+    args = broker.unserialize(args, self)
+    kw = broker.unserialize(kw, self)
+    method = getattr(self, "perspective_%s" % message)
+    tb = None
+    state = None
+    try:
+      state = method(*args, **kw)
+    except:
+      # deliberately catch everything: the traceback is serialized and
+      # reported back to the caller instead of crashing the daemon
+      tb = traceback.format_exc()
+
+    return broker.serialize((tb, state), self, method, args, kw)
+
+  # the new block devices  --------------------------
+
+  def perspective_blockdev_create(self, params):
+    bdev_s, size, on_primary = params
+    bdev = objects.ConfigObject.Loads(bdev_s)
+    if bdev is None:
+      raise ValueError("can't unserialize data!")
+    return backend.CreateBlockDevice(bdev, size, on_primary)
+
+
+  def perspective_blockdev_remove(self, params):
+    bdev_s = params[0]
+    bdev = objects.ConfigObject.Loads(bdev_s)
+    return backend.RemoveBlockDevice(bdev)
+
+
+  def perspective_blockdev_assemble(self, params):
+    bdev_s, on_primary = params
+    bdev = objects.ConfigObject.Loads(bdev_s)
+    if bdev is None:
+      raise ValueError("can't unserialize data!")
+    return backend.AssembleBlockDevice(bdev, on_primary)
+
+
+  def perspective_blockdev_shutdown(self, params):
+    bdev_s = params[0]
+    bdev = objects.ConfigObject.Loads(bdev_s)
+    if bdev is None:
+      raise ValueError("can't unserialize data!")
+    return backend.ShutdownBlockDevice(bdev)
+
+
+  def perspective_blockdev_addchild(self, params):
+    bdev_s, ndev_s = params
+    bdev = objects.ConfigObject.Loads(bdev_s)
+    ndev = objects.ConfigObject.Loads(ndev_s)
+    if bdev is None or ndev is None:
+      raise ValueError("can't unserialize data!")
+    return backend.MirrorAddChild(bdev, ndev)
+
+
+  def perspective_blockdev_removechild(self, params):
+    bdev_s, ndev_s = params
+    bdev = objects.ConfigObject.Loads(bdev_s)
+    ndev = objects.ConfigObject.Loads(ndev_s)
+    if bdev is None or ndev is None:
+      raise ValueError("can't unserialize data!")
+    return backend.MirrorRemoveChild(bdev, ndev)
+
+  def perspective_blockdev_getmirrorstatus(self, params):
+    disks = [objects.ConfigObject.Loads(dsk_s)
+            for dsk_s in params]
+    return backend.GetMirrorStatus(disks)
+
+  def perspective_blockdev_find(self, params):
+    disk = objects.ConfigObject.Loads(params[0])
+    return backend.FindBlockDevice(disk)
+
+  def perspective_blockdev_snapshot(self, params):
+    cfbd = objects.ConfigObject.Loads(params[0])
+    return backend.SnapshotBlockDevice(cfbd)
+
+  # export/import  --------------------------
+
+  def perspective_snapshot_export(self, params):
+    disk = objects.ConfigObject.Loads(params[0])
+    dest_node = params[1]
+    instance = objects.ConfigObject.Loads(params[2])
+    return backend.ExportSnapshot(disk, dest_node, instance)
+
+  def perspective_finalize_export(self, params):
+    instance = objects.ConfigObject.Loads(params[0])
+    snap_disks = [objects.ConfigObject.Loads(str_data)
+                  for str_data in params[1]]
+    return backend.FinalizeExport(instance, snap_disks)
+
+  def perspective_export_info(self, params):
+    export_dir = params[0]  # avoid shadowing the builtin 'dir'
+    einfo = backend.ExportInfo(export_dir)
+    if einfo is None:
+      return einfo
+    return einfo.Dumps()
+
+  def perspective_export_list(self, params):
+    return backend.ListExports()
+
+  def perspective_export_remove(self, params):
+    export = params[0]
+    return backend.RemoveExport(export)
+
+  # volume  --------------------------
+
+  def perspective_volume_list(self, params):
+    vgname = params[0]
+    return backend.GetVolumeList(vgname)
+
+  def perspective_vg_list(self, params):
+    return backend.ListVolumeGroups()
+
+  # bridge  --------------------------
+
+  def perspective_bridges_exist(self, params):
+    bridges_list = params[0]
+    return backend.BridgesExist(bridges_list)
+
+  # instance  --------------------------
+
+  def perspective_instance_os_add(self, params):
+    inst_s, os_disk, swap_disk = params
+    inst = objects.ConfigObject.Loads(inst_s)
+    return backend.AddOSToInstance(inst, os_disk, swap_disk)
+
+  def perspective_instance_os_import(self, params):
+    inst_s, os_disk, swap_disk, src_node, src_image = params
+    inst = objects.ConfigObject.Loads(inst_s)
+    return backend.ImportOSIntoInstance(inst, os_disk, swap_disk,
+                                        src_node, src_image)
+
+  def perspective_instance_shutdown(self, params):
+    instance = objects.ConfigObject.Loads(params[0])
+    return backend.ShutdownInstance(instance)
+
+  def perspective_instance_start(self, params):
+    instance = objects.ConfigObject.Loads(params[0])
+    extra_args = params[1]
+    return backend.StartInstance(instance, extra_args)
+
+  def perspective_instance_info(self, params):
+    return backend.GetInstanceInfo(params[0])
+
+  def perspective_all_instances_info(self, params):
+    return backend.GetAllInstancesInfo()
+
+  def perspective_instance_list(self, params):
+    return backend.GetInstanceList()
+
+  # node --------------------------
+
+  def perspective_node_info(self, params):
+    vgname = params[0]
+    return backend.GetNodeInfo(vgname)
+
+  def perspective_node_add(self, params):
+    return backend.AddNode(params[0], params[1], params[2],
+                           params[3], params[4], params[5])
+
+  def perspective_node_verify(self, params):
+    return backend.VerifyNode(params[0])
+
+  def perspective_node_start_master(self, params):
+    return backend.StartMaster()
+
+  def perspective_node_stop_master(self, params):
+    return backend.StopMaster()
+
+  def perspective_node_leave_cluster(self, params):
+    return backend.LeaveCluster()
+
+  # cluster --------------------------
+
+  def perspective_version(self, params):
+    return constants.PROTOCOL_VERSION
+
+  def perspective_configfile_list(self, params):
+    return backend.ListConfigFiles()
+
+  def perspective_upload_file(self, params):
+    return backend.UploadFile(*params)
+
+
+  # os -----------------------
+
+  def perspective_os_diagnose(self, params):
+    os_list = backend.DiagnoseOS()
+    if not os_list:
+      # this also catches return values of 'False',
+      # over which we can't iterate
+      return os_list
+    result = []
+    for data in os_list:
+      if isinstance(data, objects.OS):
+        result.append(data.Dumps())
+      elif isinstance(data, errors.InvalidOS):
+        result.append(data.args)
+      else:
+        raise errors.ProgrammerError("Invalid result from backend.DiagnoseOS"
+                                     " (class %s, %s)" %
+                                     (str(data.__class__), data))
+
+    return result
+
+  def perspective_os_get(self, params):
+    name = params[0]
+    try:
+      # use a local name that doesn't shadow the global os module
+      os_data = backend.OSFromDisk(name).Dumps()
+    except errors.InvalidOS, err:
+      os_data = err.args
+    return os_data
+
+  # hooks -----------------------
+
+  def perspective_hooks_runner(self, params):
+    hpath, phase, env = params
+    hr = backend.HooksRunner()
+    return hr.RunHooks(hpath, phase, env)
+
+
+class MyRealm:
+  __implements__ = portal.IRealm
+  def requestAvatar(self, avatarId, mind, *interfaces):
+    if pb.IPerspective not in interfaces:
+      raise NotImplementedError
+    return pb.IPerspective, ServerObject(avatarId), lambda: None
+
+
+def ParseOptions():
+  """Parse the command line options.
+
+  Returns:
+    (options, args) as from OptionParser.parse_args()
+
+  """
+  parser = OptionParser(description="Ganeti node daemon",
+                        usage="%prog [-f] [-d]",
+                        version="%%prog (ganeti) %s" %
+                        constants.RELEASE_VERSION)
+
+  parser.add_option("-f", "--foreground", dest="fork",
+                    help="Don't detach from the current terminal",
+                    default=True, action="store_false")
+  parser.add_option("-d", "--debug", dest="debug",
+                    help="Enable some debug messages",
+                    default=False, action="store_true")
+  options, args = parser.parse_args()
+  return options, args
+
+
+def main():
+  options, args = ParseOptions()
+  for fname in (constants.SSL_CERT_FILE,):
+    if not os.path.isfile(fname):
+      print "config file %s not present, will not run." % fname
+      sys.exit(5)
+
+  try:
+    ss = ssconf.SimpleStore()
+    port = ss.GetNodeDaemonPort()
+    pwdata = ss.GetNodeDaemonPassword()
+  except errors.ConfigurationError, err:
+    print "Cluster configuration incomplete: '%s'" % str(err)
+    sys.exit(5)
+
+  # become a daemon
+  if options.fork:
+    createDaemon()
+
+  logger.SetupLogging(twisted_workaround=True, debug=options.debug,
+                      program="ganeti-noded")
+
+  p = portal.Portal(MyRealm())
+  p.registerChecker(
+    checkers.InMemoryUsernamePasswordDatabaseDontUse(master_node=pwdata))
+  reactor.listenSSL(port, pb.PBServerFactory(p), ServerContextFactory())
+  reactor.run()
+
+
+def createDaemon():
+  """Detach a process from the controlling terminal and run it in the
+  background as a daemon.
+  """
+  UMASK = 077
+  WORKDIR = "/"
+  # Default maximum for the number of available file descriptors.
+  if 'SC_OPEN_MAX' in os.sysconf_names:
+    try:
+      MAXFD = os.sysconf('SC_OPEN_MAX')
+      if MAXFD < 0:
+        MAXFD = 1024
+    except OSError:
+      MAXFD = 1024
+  else:
+    MAXFD = 1024
+  # The standard I/O file descriptors are redirected to /dev/null by default.
+  #REDIRECT_TO = getattr(os, "devnull", "/dev/null")
+  REDIRECT_TO = constants.LOG_NODESERVER
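+  # standard double-fork: the first child calls setsid() to become a session
+  # leader without a controlling terminal, and the second fork ensures the
+  # daemon can never reacquire one; the intermediate parents exit immediately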
+  try:
+    pid = os.fork()
+  except OSError, e:
+    raise Exception("%s [%d]" % (e.strerror, e.errno))
+  if (pid == 0):	# The first child.
+    os.setsid()
+    try:
+      pid = os.fork()	# Fork a second child.
+    except OSError, e:
+      raise Exception("%s [%d]" % (e.strerror, e.errno))
+    if (pid == 0):	# The second child.
+      os.chdir(WORKDIR)
+      os.umask(UMASK)
+    else:
+      # use _exit() so that cleanup handlers are not run in the parent
+      os._exit(0)  # exit parent (the first child) of the second child
+  else:
+    os._exit(0)	# Exit parent of the first child.
+  maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
+  if (maxfd == resource.RLIM_INFINITY):
+    maxfd = MAXFD
+
+  # Iterate through and close all file descriptors.
+  for fd in range(0, maxfd):
+    try:
+      os.close(fd)
+    except OSError:	# ERROR, fd wasn't open to begin with (ignored)
+      pass
+  os.open(REDIRECT_TO, os.O_RDWR|os.O_CREAT|os.O_APPEND) # standard input (0)
+  # Duplicate standard input to standard output and standard error.
+  os.dup2(0, 1)			# standard output (1)
+  os.dup2(0, 2)			# standard error (2)
+  return 0
+
+
+if __name__ == '__main__':
+  main()
diff --git a/daemons/ganeti-watcher b/daemons/ganeti-watcher
new file mode 100755
index 0000000000000000000000000000000000000000..39250e5ecd15d14a3ebcfaa0a639c2ac03b5329f
--- /dev/null
+++ b/daemons/ganeti-watcher
@@ -0,0 +1,333 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Tool to restart erronously downed virtual machines.
+
+This program and set of classes implement a watchdog to restart
+virtual machines in a Ganeti cluster that have crashed or been killed
+by a node reboot.  Run from cron or similar.
+"""
+
+
+import os
+import sys
+import time
+import fcntl
+import errno
+from optparse import OptionParser
+
+
+from ganeti import utils
+from ganeti import constants
+
+
+LOGFILE = '/var/log/ganeti/watcher.log'
+MAXTRIES = 5
+BAD_STATES = ['stopped']
+HELPLESS_STATES = ['(node down)']
+NOTICE = 'NOTICE'
+ERROR = 'ERROR'
+
+
+class Error(Exception):
+  """Generic custom error class."""
+  pass
+
+
+def Indent(s, prefix='| '):
+  """Indent a piece of text with a given prefix before each line.
+
+  Args:
+    s: The string to indent
+    prefix: The string to prepend to each line.
+  """
+  return "%s%s\n" % (prefix, ('\n' + prefix).join(s.splitlines()))
+
+
+def DoCmd(cmd):
+  """Run a shell command.
+
+  Args:
+    cmd: the command to run.
+
+  Raises Error with verbose commentary on failure.
+  """
+  res = utils.RunCmd(cmd)
+
+  if res.failed:
+    raise Error("Command %s failed:\n%s\nstdout:\n%sstderr:\n%s" %
+                (repr(cmd),
+                 Indent(res.fail_reason),
+                 Indent(res.stdout),
+                 Indent(res.stderr)))
+
+  return res
+
+
+class RestarterState(object):
+  """Interface to a state file recording restart attempts.
+
+  Methods:
+    __init__(): open, lock, read and parse the file.
+                Raises StandardError on lock contention.
+
+    NumberOfAttempts(instance): returns the number of times in succession
+                                a restart has been attempted of the given
+                                instance.
+
+    RecordAttempt(instance): records one restart attempt of the given
+                             instance at the current time.
+
+    Remove(instance): removes the record for the given instance, if one
+                      exists.
+
+    Save(): saves all records to file, releases lock and closes file.
+  """
+  def __init__(self):
+    # The two-step dance below is necessary to allow both opening an existing
+    # file read/write and creating it if missing: a vanilla open() either
+    # truncates an existing file ('w+') or fails on a missing one ('r+').
+    f = os.open(constants.WATCHER_STATEFILE, os.O_RDWR | os.O_CREAT)
+    f = os.fdopen(f, 'w+')
+
+    try:
+      fcntl.flock(f.fileno(), fcntl.LOCK_EX|fcntl.LOCK_NB)
+    except IOError, x:
+      if x.errno == errno.EAGAIN:
+        raise StandardError('State file already locked')
+      raise
+
+    self.statefile = f
+    self.inst_map = {}
+
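+    # each line of the state file has the format "name:when:count"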
+    for line in f:
+      name, when, count = line.rstrip().split(':')
+
+      when = int(when)
+      count = int(count)
+
+      self.inst_map[name] = (when, count)
+
+  def NumberOfAttempts(self, instance):
+    """Returns number of previous restart attempts.
+
+    Args:
+      instance - the instance to look up.
+    """
+    assert self.statefile
+
+    if instance.name in self.inst_map:
+      return self.inst_map[instance.name][1]
+
+    return 0
+
+  def RecordAttempt(self, instance):
+    """Record a restart attempt.
+
+    Args:
+      instance - the instance being restarted
+    """
+    assert self.statefile
+
+    when = time.time()
+
+    self.inst_map[instance.name] = (when, 1 + self.NumberOfAttempts(instance))
+
+  def Remove(self, instance):
+    """Update state to reflect that a machine is running, i.e. remove record
+
+    Args:
+      instance - the instance to remove from books
+
+    This method removes the record for a named instance
+    """
+    assert self.statefile
+
+    if instance.name in self.inst_map:
+      del self.inst_map[instance.name]
+
+  def Save(self):
+    """Save records to file, then unlock and close file.
+    """
+    assert self.statefile
+
+    self.statefile.seek(0)
+    self.statefile.truncate()
+
+    for name in self.inst_map:
+      print >> self.statefile, "%s:%d:%d" % ((name,) + self.inst_map[name])
+
+    fcntl.flock(self.statefile.fileno(), fcntl.LOCK_UN)
+
+    self.statefile.close()
+    self.statefile = None
+
+
+class Instance(object):
+  """Abstraction for a Virtual Machine instance.
+
+  Methods:
+    Restart(): issue a command to restart the represented machine.
+  """
+  def __init__(self, name, state):
+    self.name = name
+    self.state = state
+
+  def Restart(self):
+    DoCmd(['gnt-instance', 'startup', '--lock-retries=15', self.name])
+
+
+class InstanceList(object):
+  """The set of Virtual Machine instances on a cluster.
+  """
+  cmd = ['gnt-instance', 'list', '--lock-retries=15',
+         '-o', 'name,admin_state,oper_state', '--no-headers', '--separator=:']
+
+  def __init__(self):
+    res = DoCmd(self.cmd)
+
+    lines = res.stdout.splitlines()
+
+    self.instances = []
+    for line in lines:
+      fields = [fld.strip() for fld in line.split(':')]
+
+      if len(fields) != 3:
+        continue
+      if fields[1] == "no":  # no autostart, we don't care about this instance
+        continue
+      name, status = fields[0], fields[2]
+
+      self.instances.append(Instance(name, status))
+
+  def __iter__(self):
+    return self.instances.__iter__()
+
+
+class Message(object):
+  """Encapsulation of a notice or error message.
+  """
+  def __init__(self, level, msg):
+    self.level = level
+    self.msg = msg
+    self.when = time.time()
+
+  def __str__(self):
+    return self.level + ' ' + time.ctime(self.when) + '\n' + Indent(self.msg)
+
+
+class Restarter(object):
+  """Encapsulate the logic for restarting erronously halted virtual machines.
+
+  The calling program should periodically instantiate me and call Run().
+  This will traverse the list of instances, and make up to MAXTRIES attempts
+  to restart machines that are down.
+  """
+  def __init__(self):
+    self.instances = InstanceList()
+    self.messages = []
+
+  def Run(self):
+    """Make a pass over the list of instances, restarting downed ones.
+    """
+    notepad = RestarterState()
+
+    for instance in self.instances:
+      if instance.state in BAD_STATES:
+        n = notepad.NumberOfAttempts(instance)
+
+        if n > MAXTRIES:
+          # stay quiet.
+          continue
+        elif n < MAXTRIES:
+          last = " (Attempt #%d)" % (n + 1)
+        else:
+          notepad.RecordAttempt(instance)
+          self.messages.append(Message(ERROR, "Could not restart %s after %d"
+                                       " attempts, giving up..." %
+                                       (instance.name, MAXTRIES)))
+          continue
+        try:
+          self.messages.append(Message(NOTICE,
+                                       "Restarting %s%s." %
+                                       (instance.name, last)))
+          instance.Restart()
+        except Error, x:
+          self.messages.append(Message(ERROR, str(x)))
+
+        notepad.RecordAttempt(instance)
+      elif instance.state in HELPLESS_STATES:
+        if notepad.NumberOfAttempts(instance):
+          notepad.Remove(instance)
+      else:
+        if notepad.NumberOfAttempts(instance):
+          notepad.Remove(instance)
+          msg = Message(NOTICE,
+                        "Restart of %s succeeded." % instance.name)
+          self.messages.append(msg)
+
+    notepad.Save()
+
+  def WriteReport(self, logfile):
+    """
+    Log all messages to file.
+
+    Args:
+      logfile: file object open for writing (the log file)
+    """
+    for msg in self.messages:
+      print >> logfile, str(msg)
+
+
+def ParseOptions():
+  """Parse the command line options.
+
+  Returns:
+    (options, args) as from OptionParser.parse_args()
+
+  """
+  parser = OptionParser(description="Ganeti cluster watcher",
+                        usage="%prog [-d]",
+                        version="%%prog (ganeti) %s" %
+                        constants.RELEASE_VERSION)
+
+  parser.add_option("-d", "--debug", dest="debug",
+                    help="Don't redirect messages to the log file",
+                    default=False, action="store_true")
+  options, args = parser.parse_args()
+  return options, args
+
+
+def main():
+  """Main function.
+
+  """
+  options, args = ParseOptions()
+
+  if not options.debug:
+    sys.stderr = sys.stdout = open(LOGFILE, 'a')
+
+  try:
+    restarter = Restarter()
+    restarter.Run()
+    restarter.WriteReport(sys.stdout)
+  except Error, err:
+    print err
+
+if __name__ == '__main__':
+  main()
diff --git a/docs/Makefile.am b/docs/Makefile.am
new file mode 100644
index 0000000000000000000000000000000000000000..7a499c7222bb23416e51fea9863985446b704821
--- /dev/null
+++ b/docs/Makefile.am
@@ -0,0 +1,10 @@
+docdir = $(datadir)/doc/$(PACKAGE)
+
+dist_doc_DATA = hooks.html hooks.pdf
+EXTRA_DIST = hooks.sgml
+
+%.html: %.sgml
+	docbook2html --nochunks $<
+
+%.pdf: %.sgml
+	docbook2pdf $<
diff --git a/docs/hooks.sgml b/docs/hooks.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..daa1d07e867c9c44f42678fb398f6497191f06a7
--- /dev/null
+++ b/docs/hooks.sgml
@@ -0,0 +1,566 @@
+<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
+]>
+  <article class="specification">
+  <articleinfo>
+    <title>Ganeti customisation using hooks</title>
+  </articleinfo>
+  <para>This document describes ganeti version 1.2</para>
+  <section>
+    <title>Introduction</title>
+
+    <para>
+      In order to allow customisation of operations, ganeti will run
+      scripts under <filename
+      class="directory">/etc/ganeti/hooks</filename> based on certain
+      rules.
+    </para>
+
+      <para>This is similar to the <filename
+      class="directory">/etc/network/</filename> structure present in
+      Debian for network interface handling.</para>
+
+    </section>
+
+
+    <section>
+      <title>Organisation</title>
+
+      <para>For every operation, two sets of scripts are run:
+
+      <itemizedlist>
+          <listitem>
+            <simpara>pre phase (for authorization/checking)</simpara>
+          </listitem>
+          <listitem>
+            <simpara>post phase (for logging)</simpara>
+          </listitem>
+        </itemizedlist>
+      </para>
+
+      <para>Also, for each operation, the scripts are run on one or
+      more nodes, depending on the operation type.</para>
+
+      <para>Note that, even though we call them scripts, we are
+      actually talking about any executable.</para>
+
+      <section>
+        <title><emphasis>pre</emphasis> scripts</title>
+
+        <para>The <emphasis>pre</emphasis> scripts have a definite
+        target: to check that the operation is allowed given the
+        site-specific constraints. You could have, for example, a rule
+        that says every new instance is required to exist in a
+        database; to implement this, you could write a script that
+        checks the new instance parameters against your
+        database.</para>
+
+        <para>What matters for these scripts is their return
+        code (zero for success, non-zero for failure). However, if
+        they modify the environment in any way, they should be
+        idempotent, as failed executions could be restarted and thus
+        the script(s) run again with exactly the same
+        parameters.</para>
+
+      </section>
+
+      <section>
+        <title><emphasis>post</emphasis> scripts</title>
+
+        <para>These scripts should do whatever you need as a reaction
+        to the completion of an operation. Their return code is not
+        checked (but logged), and they should not depend on the fact
+        that the <emphasis>pre</emphasis> scripts have been
+        run.</para>
+
+      </section>
+
+      <section>
+        <title>Naming</title>
+
+        <para>The allowed names for the scripts consist of (similar to
+        <citerefentry> <refentrytitle>run-parts</refentrytitle>
+        <manvolnum>8</manvolnum> </citerefentry>) upper and lower
+        case letters, digits, underscores and hyphens; in other words,
+        names matching the regexp
+        <computeroutput>^[a-zA-Z0-9_-]+$</computeroutput>. Also,
+        non-executable scripts will be ignored.
+        </para>
+      </section>
+
+      <section>
+        <title>Order of execution</title>
+
+        <para>On a single node, the scripts in a directory are run in
+        lexicographic order (more exactly, the python string
+        comparison order). It is advisable to implement the usual
+        <emphasis>NN-name</emphasis> convention where
+        <emphasis>NN</emphasis> is a two digit number.</para>
+
+        <para>For an operation whose hooks are run on multiple nodes,
+        there is no specific ordering of nodes with regard to hooks
+        execution; you should assume that the scripts are run in
+        parallel on the target nodes (keeping on each node the above
+        specified ordering).  If you need any kind of inter-node
+        synchronisation, you have to implement it yourself in the
+        scripts.</para>
+
+      </section>
+
+      <section>
+        <title>Execution environment</title>
+
+        <para>The scripts will be run as follows:
+          <itemizedlist>
+          <listitem>
+            <simpara>no command line arguments</simpara>
+          </listitem>
+            <listitem>
+              <simpara>no controlling <acronym>tty</acronym></simpara>
+            </listitem>
+            <listitem>
+              <simpara><varname>stdin</varname> is
+              actually <filename>/dev/null</filename></simpara>
+            </listitem>
+            <listitem>
+              <simpara><varname>stdout</varname> and
+              <varname>stderr</varname> are directed to
+              files</simpara>
+            </listitem>
+          <listitem>
+            <simpara>the <varname>PATH</varname> is reset to
+            <literal>/sbin:/bin:/usr/sbin:/usr/bin</literal></simpara>
+          </listitem>
+          <listitem>
+            <simpara>the environment is cleared, and only
+            ganeti-specific variables will be left</simpara>
+          </listitem>
+          </itemizedlist>
+
+        </para>
+
+      <para>All information about the cluster is passed using
+      environment variables. Different operations will have slightly
+      different environments, but most of the variables are
+      common.</para>
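+
+      <para>As an illustration, a <emphasis>post</emphasis> hook could
+      log every operation it sees. A minimal python sketch (the log
+      file name is an arbitrary choice; only the
+      <constant>GANETI_</constant>-prefixed variables are supplied by
+      ganeti, as described in the next section):</para>
+
+      <screen>
+#!/usr/bin/python
+import os
+
+op_id = os.environ.get("GANETI_OP_ID", "unknown")
+phase = os.environ.get("GANETI_HOOKS_PHASE", "unknown")
+
+# stdout/stderr are redirected to files, so use a private log file
+log = open("/var/log/ganeti/hook-example.log", "a")
+log.write("%s %s\n" % (op_id, phase))
+log.close()
+      </screen>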
+
+    </section>
+
+
+    <section>
+      <title>Operation list</title>
+      <table>
+        <title>Operation list</title>
+        <tgroup cols="7">
+          <colspec>
+          <colspec>
+          <colspec>
+          <colspec>
+          <colspec>
+          <colspec colname="prehooks">
+          <colspec colname="posthooks">
+          <spanspec namest="prehooks" nameend="posthooks"
+            spanname="bothhooks">
+          <thead>
+            <row>
+              <entry>Operation ID</entry>
+              <entry>Directory prefix</entry>
+              <entry>Description</entry>
+              <entry>Command</entry>
+              <entry>Supported env. variables</entry>
+              <entry><emphasis>pre</emphasis> hooks</entry>
+              <entry><emphasis>post</emphasis> hooks</entry>
+            </row>
+          </thead>
+          <tbody>
+            <row>
+              <entry>OP_INIT_CLUSTER</entry>
+              <entry><filename class="directory">cluster-init</filename></entry>
+              <entry>Initialises the cluster</entry>
+              <entry><computeroutput>gnt-cluster init</computeroutput></entry>
+              <entry><constant>CLUSTER</constant>, <constant>MASTER</constant></entry>
+              <entry spanname="bothhooks">master node, cluster name</entry>
+            </row>
+            <row>
+              <entry>OP_MASTER_FAILOVER</entry>
+              <entry><filename class="directory">master-failover</filename></entry>
+              <entry>Changes the master</entry>
+              <entry><computeroutput>gnt-cluster master-failover</computeroutput></entry>
+              <entry><constant>OLD_MASTER</constant>, <constant>NEW_MASTER</constant></entry>
+              <entry>the new master</entry>
+              <entry>all nodes</entry>
+            </row>
+            <row>
+              <entry>OP_ADD_NODE</entry>
+              <entry><filename class="directory">node-add</filename></entry>
+              <entry>Adds a new node to the cluster</entry>
+              <entry><computeroutput>gnt-node add</computeroutput></entry>
+              <entry><constant>NODE_NAME</constant>, <constant>NODE_PIP</constant>, <constant>NODE_SIP</constant></entry>
+              <entry>all existing nodes</entry>
+              <entry>all existing nodes plus the new node</entry>
+            </row>
+            <row>
+              <entry>OP_REMOVE_NODE</entry>
+              <entry><filename class="directory">node-remove</filename></entry>
+              <entry>Removes a node from the cluster</entry>
+              <entry><computeroutput>gnt-node remove</computeroutput></entry>
+              <entry><constant>NODE_NAME</constant></entry>
+              <entry spanname="bothhooks">all existing nodes except the removed node</entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_ADD</entry>
+              <entry><filename class="directory">instance-add</filename></entry>
+              <entry>Creates a new instance</entry>
+              <entry><computeroutput>gnt-instance add</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant>, <constant>DISK_TEMPLATE</constant>, <constant>MEM_SIZE</constant>, <constant>DISK_SIZE</constant>, <constant>SWAP_SIZE</constant>, <constant>VCPUS</constant>, <constant>INSTANCE_IP</constant>, <constant>INSTANCE_ADD_MODE</constant>, <constant>SRC_NODE</constant>, <constant>SRC_PATH</constant>, <constant>SRC_IMAGE</constant></entry>
+              <entry spanname="bothhooks" morerows="4">master node, primary and
+                   secondary nodes</entry>
+            </row>
+            <row>
+              <entry>OP_BACKUP_EXPORT</entry>
+              <entry><filename class="directory">instance-export</filename></entry>
+              <entry>Export the instance</entry>
+              <entry><computeroutput>gnt-backup export</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>EXPORT_NODE</constant>, <constant>EXPORT_DO_SHUTDOWN</constant></entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_START</entry>
+              <entry><filename class="directory">instance-start</filename></entry>
+              <entry>Starts an instance</entry>
+              <entry><computeroutput>gnt-instance start</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant>, <constant>FORCE</constant></entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_SHUTDOWN</entry>
+              <entry><filename class="directory">instance-shutdown</filename></entry>
+              <entry>Stops an instance</entry>
+              <entry><computeroutput>gnt-instance shutdown</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant></entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_MODIFY</entry>
+              <entry><filename class="directory">instance-modify</filename></entry>
+              <entry>Modifies the instance parameters.</entry>
+              <entry><computeroutput>gnt-instance modify</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>MEM_SIZE</constant>, <constant>VCPUS</constant>, <constant>INSTANCE_IP</constant></entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_FAILOVER</entry>
+              <entry><filename class="directory">instance-failover</filename></entry>
+              <entry>Failover an instance</entry>
+              <entry><computeroutput>gnt-instance failover</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant>, <constant>IGNORE_CONSISTENCY</constant></entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_REMOVE</entry>
+              <entry><filename class="directory">instance-remove</filename></entry>
+              <entry>Remove an instance</entry>
+              <entry><computeroutput>gnt-instance remove</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant></entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_ADD_MDDRBD</entry>
+              <entry><filename class="directory">mirror-add</filename></entry>
+              <entry>Adds a mirror component</entry>
+              <entry><computeroutput>gnt-instance add-mirror</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>NEW_SECONDARY</constant>, <constant>DISK_NAME</constant></entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_REMOVE_MDDRBD</entry>
+              <entry><filename class="directory">mirror-remove</filename></entry>
+              <entry>Removes a mirror component</entry>
+              <entry><computeroutput>gnt-instance remove-mirror</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>OLD_SECONDARY</constant>, <constant>DISK_NAME</constant>, <constant>DISK_ID</constant></entry>
+            </row>
+            <row>
+              <entry>OP_INSTANCE_REPLACE_DISKS</entry>
+              <entry><filename class="directory">mirror-replace</filename></entry>
+              <entry>Replace all mirror components</entry>
+              <entry><computeroutput>gnt-instance replace-disks</computeroutput></entry>
+              <entry><constant>INSTANCE_NAME</constant>, <constant>OLD_SECONDARY</constant>, <constant>NEW_SECONDARY</constant></entry>
+            </row>
+          </tbody>
+        </tgroup>
+      </table>
+    </section>
+
+    <section>
+      <title>Environment variables</title>
+
+      <para>Note that all variables listed here are prefixed with
+      <constant>GANETI_</constant> in order to keep them in a
+      separate namespace.</para>
+
+      <section>
+        <title>Common variables</title>
+
+        <para>This is the list of environment variables supported by
+        all operations:</para>
+
+        <variablelist>
+          <varlistentry>
+            <term>HOOKS_VERSION</term>
+            <listitem>
+              <para>Documents the hooks interface version. In case
+            this doesn't match what the script expects, it should not
+            run. This document describes version
+            <literal>1</literal> of the interface.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>HOOKS_PHASE</term>
+            <listitem>
+              <para>one of <constant>PRE</constant> or
+              <constant>POST</constant>, denoting the phase we are
+              in.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>CLUSTER</term>
+            <listitem>
+              <para>the cluster name</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>MASTER</term>
+            <listitem>
+              <para>the master node</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>OP_ID</term>
+            <listitem>
+              <para>one of the <constant>OP_*</constant> values from
+              the table of operations</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>OBJECT_TYPE</term>
+            <listitem>
+              <para>one of <simplelist type="inline">
+                  <member><constant>INSTANCE</constant></member>
+                  <member><constant>NODE</constant></member>
+                  <member><constant>CLUSTER</constant></member>
+                </simplelist>, showing the target of the operation.
+             </para>
+            </listitem>
+          </varlistentry>
+          <!-- commented out since it causes problems in our rpc
+               multi-node optimised calls
+          <varlistentry>
+            <term>HOST_NAME</term>
+            <listitem>
+              <para>The name of the node the hook is run on as known by
+            the cluster.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>HOST_TYPE</term>
+            <listitem>
+              <para>one of <simplelist type="inline">
+                  <member><constant>MASTER</constant></member>
+                  <member><constant>NODE</constant></member>
+                </simplelist>, showing the role of this node in the cluster.
+             </para>
+            </listitem>
+          </varlistentry>
+          -->
+        </variablelist>
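+
+        <para>As an illustration only, the following sketch shows how
+        a hook script might consume these variables. This is a
+        hypothetical example (the script name
+        <filename>00check</filename> is an assumption, not something
+        shipped with Ganeti):</para>
+
+        <screen>
+#!/usr/bin/python
+# hypothetical example hook (00check); illustrates the interface only
+import os
+import sys
+
+# exit cleanly without doing anything if the interface version is not
+# the one this script was written for
+if os.environ.get("GANETI_HOOKS_VERSION") != "1":
+  sys.exit(0)
+
+phase = os.environ["GANETI_HOOKS_PHASE"]  # PRE or POST
+op_id = os.environ["GANETI_OP_ID"]        # e.g. OP_INSTANCE_START
+
+sys.stderr.write("%s hook for %s on cluster %s\n" %
+                 (phase, op_id, os.environ["GANETI_CLUSTER"]))
+sys.exit(0)
+</screen>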
+      </section>
+
+      <section>
+        <title>Specialised variables</title>
+
+        <para>This is the list of variables which are specific to one
+        or more operations.</para>
+        <variablelist>
+          <varlistentry>
+            <term>INSTANCE_NAME</term>
+            <listitem>
+              <para>The name of the instance which is the target of
+              the operation.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>INSTANCE_DISK_TYPE</term>
+            <listitem>
+              <para>The disk type for the instance.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>INSTANCE_DISK_SIZE</term>
+            <listitem>
+              <para>The (OS) disk size for the instance.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>INSTANCE_OS</term>
+            <listitem>
+              <para>The name of the instance OS.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>INSTANCE_PRIMARY</term>
+            <listitem>
+              <para>The name of the node which is the primary for the
+              instance.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>INSTANCE_SECONDARIES</term>
+            <listitem>
+              <para>Space-separated list of secondary nodes for the
+              instance.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>NODE_NAME</term>
+            <listitem>
+              <para>The target node of this operation (not the node on
+              which the hook runs).</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>NODE_PIP</term>
+            <listitem>
+              <para>The primary IP of the target node (the one over
+              which inter-node communication is done).</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>NODE_SIP</term>
+            <listitem>
+              <para>The secondary IP of the target node (the one over
+              which DRBD replication is done). This can be equal to
+              the primary IP if the cluster is not
+              dual-homed.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>OLD_MASTER</term>
+            <term>NEW_MASTER</term>
+            <listitem>
+              <para>The old and the new master, respectively, for the
+              master failover operation.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>FORCE</term>
+            <listitem>
+              <para>Provided by some operations if the user specified
+              this flag.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>IGNORE_CONSISTENCY</term>
+            <listitem>
+              <para>Defined when the user has specified this flag. It
+              is used when failing over an instance whose primary
+              node is down.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>MEM_SIZE, DISK_SIZE, SWAP_SIZE, VCPUS</term>
+            <listitem>
+              <para>The memory size, disk size, swap size and number
+              of processors selected for the instance (in
+              <command>gnt-instance add</command> or
+              <command>gnt-instance modify</command>).</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>INSTANCE_IP</term>
+            <listitem>
+              <para>If defined, the instance IP in the
+              <command>gnt-instance add</command> and
+              <command>gnt-instance modify</command> commands. If not
+              defined, no IP has been set for the instance.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>DISK_TEMPLATE</term>
+            <listitem>
+              <para>The disk template type when creating the instance.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>INSTANCE_ADD_MODE</term>
+            <listitem>
+              <para>The mode of the creation: either
+              <constant>create</constant> for creating from scratch
+              or <constant>import</constant> for restoring from an
+              exported image.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>SRC_NODE, SRC_PATH, SRC_IMAGE</term>
+            <listitem>
+              <para>In case the instance has been added by import,
+              these variables are defined and point to the source
+              node, source path (the directory containing the image
+              and the config file) and the source disk image
+              file.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>DISK_NAME</term>
+            <listitem>
+              <para>The disk name (either <filename>sda</filename> or
+              <filename>sdb</filename>) in mirror operations
+              (add/remove mirror).</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>DISK_ID</term>
+            <listitem>
+              <para>The disk id for mirror remove operations. You can
+              look this up using <command>gnt-instance
+              info</command>.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>NEW_SECONDARY</term>
+            <listitem>
+              <para>The name of the node on which the new mirror
+              component is being added. This can be the name of the
+              current secondary, if the new mirror is on the same
+              secondary.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>OLD_SECONDARY</term>
+            <listitem>
+              <para>The name of the old secondary. This is used in
+              both <command>replace-disks</command> and
+              <command>remove-mirror</command>. Note that this can be
+              equal to the new secondary (only
+              <command>replace-disks</command> has both variables) if
+              the secondary node hasn't actually changed.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>EXPORT_NODE</term>
+            <listitem>
+              <para>The node to which the instance image was
+              exported.</para>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>EXPORT_DO_SHUTDOWN</term>
+            <listitem>
+              <para>This variable tells whether the instance was shut
+              down while doing the export. If it was, the filesystem
+              is likely to be consistent; if it was not, the
+              filesystem would need a check (journal replay or full
+              fsck) in order to guarantee consistency.</para>
+            </listitem>
+          </varlistentry>
+        </variablelist>
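+
+        <para>As a sketch of how these variables might be used, a
+        hypothetical post-hook for exports (not shipped with Ganeti;
+        it also assumes the flag is rendered in the environment as the
+        string <literal>False</literal>) could warn about backups
+        taken from a running instance:</para>
+
+        <screen>
+#!/usr/bin/python
+# hypothetical example post-hook; illustrates the interface only
+import os
+import sys
+
+if os.environ.get("GANETI_OP_ID") == "OP_BACKUP_EXPORT":
+  instance = os.environ["GANETI_INSTANCE_NAME"]
+  # assumption: the flag is rendered as the strings "True"/"False"
+  if os.environ.get("GANETI_EXPORT_DO_SHUTDOWN") == "False":
+    sys.stderr.write("warning: export of %s was taken without shutdown,"
+                     " check the filesystem before trusting it\n" % instance)
+sys.exit(0)
+</screen>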
+
+      </section>
+
+    </section>
+
+  </section>
+  </article>
diff --git a/ganeti.initd b/ganeti.initd
new file mode 100755
index 0000000000000000000000000000000000000000..a10fe996aabcf7a48078fd7ce2a04e27b9664da2
--- /dev/null
+++ b/ganeti.initd
@@ -0,0 +1,52 @@
+#! /bin/sh
+# ganeti node daemon starter script
+# based on skeleton from Debian GNU/Linux
+
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+DAEMON=/usr/local/sbin/ganeti-noded
+NAME=ganeti-noded
+SCRIPTNAME=/etc/init.d/ganeti
+DESC="Ganeti node daemon"
+
+test -f $DAEMON || exit 0
+
+set -e
+
+. /lib/lsb/init-functions
+
+check_config() {
+	for fname in /var/lib/ganeti/ssconf_node_pass /var/lib/ganeti/server.pem; do
+		if ! [ -f "$fname" ]; then
+			log_end_msg 0
+			log_warning_msg "Config $fname not there, will not run."
+			exit 0
+		fi
+	done
+}
+
+case "$1" in
+  start)
+	log_begin_msg "Starting $DESC..."
+	check_config
+	start-stop-daemon --start --quiet --exec $DAEMON || log_end_msg 1
+	log_end_msg 0
+	;;
+  stop)
+	log_begin_msg "Stopping $DESC..."
+	start-stop-daemon --stop --quiet --name $NAME || log_end_msg 1
+	log_end_msg 0
+	;;
+  restart|force-reload)
+	log_begin_msg "Restarting $DESC..."
+	start-stop-daemon --stop --quiet --oknodo --retry 30 --name $NAME
+	check_config
+	start-stop-daemon --start --quiet --exec $DAEMON || log_end_msg 1
+	log_end_msg 0
+	;;
+  *)
+	log_success_msg "Usage: $SCRIPTNAME {start|stop|force-reload|restart}"
+	exit 1
+	;;
+esac
+
+exit 0
diff --git a/lib/Makefile.am b/lib/Makefile.am
new file mode 100644
index 0000000000000000000000000000000000000000..02823595f58acf3ceb553b976d9bb9cada44d4b6
--- /dev/null
+++ b/lib/Makefile.am
@@ -0,0 +1,4 @@
+pkgpython_PYTHON = __init__.py backend.py cli.py cmdlib.py config.py \
+	objects.py errors.py logger.py ssh.py utils.py rpc.py \
+	bdev.py hypervisor.py opcodes.py mcpu.py constants.py \
+	ssconf.py
diff --git a/lib/__init__.py b/lib/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d0292bb0a3aaea6ff3b82d20e5cdf79367f0857f
--- /dev/null
+++ b/lib/__init__.py
@@ -0,0 +1,22 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+# empty file for package definition
diff --git a/lib/backend.py b/lib/backend.py
new file mode 100644
index 0000000000000000000000000000000000000000..7b36e379a18e0fb669e0c0ecc3fe5b5567d70b6a
--- /dev/null
+++ b/lib/backend.py
@@ -0,0 +1,1337 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Functions used by the node daemon"""
+
+
+import os
+import os.path
+import shutil
+import time
+import tempfile
+import stat
+import errno
+import re
+import subprocess
+
+from ganeti import logger
+from ganeti import errors
+from ganeti import utils
+from ganeti import ssh
+from ganeti import hypervisor
+from ganeti import constants
+from ganeti import bdev
+from ganeti import objects
+
+
+def ListConfigFiles():
+  """Return a list of the config files present on the local node.
+  """
+
+  configfiles = []
+
+  for testfile in constants.MASTER_CONFIGFILES:
+    if os.path.exists(testfile):
+      configfiles.append(testfile)
+
+  for testfile in constants.NODE_CONFIGFILES:
+    if os.path.exists(testfile):
+      configfiles.append(testfile)
+
+  return configfiles
+
+
+def StartMaster():
+  """Activate local node as master node.
+
+  There are two needed steps for this:
+    - register the master init script, and also run it now
+    - register the cron script
+
+  """
+  result = utils.RunCmd(["update-rc.d", constants.MASTER_INITD_NAME,
+                         "defaults", "21", "79"])
+
+  if result.failed:
+    logger.Error("could not register the master init.d script with command"
+                 " %s, error %s" % (result.cmd, result.output))
+    return False
+
+  result = utils.RunCmd([constants.MASTER_INITD_SCRIPT, "start"])
+
+  if result.failed:
+    logger.Error("could not activate cluster interface with command %s,"
+                 " error %s" % (result.cmd, result.output))
+    return False
+
+  utils.RemoveFile(constants.MASTER_CRON_LINK)
+  os.symlink(constants.MASTER_CRON_FILE, constants.MASTER_CRON_LINK)
+  return True
+
+
+def StopMaster():
+  """Deactivate this node as master.
+
+  This does two things:
+    - remove links to master's startup script
+    - remove link to master cron script.
+
+  """
+  result = utils.RunCmd(["update-rc.d", "-f",
+                          constants.MASTER_INITD_NAME, "remove"])
+  if result.failed:
+    logger.Error("could not unregister the master script with command"
+                 " %s, error %s" % (result.cmd, result.output))
+    return False
+
+  result = utils.RunCmd([constants.MASTER_INITD_SCRIPT, "stop"])
+
+  if result.failed:
+    logger.Error("could not deactivate cluster interface with command %s,"
+                 " error %s" % (result.cmd, result.output))
+    return False
+
+  utils.RemoveFile(constants.MASTER_CRON_LINK)
+
+  return True
+
+
+def AddNode(dsa, dsapub, rsa, rsapub, ssh, sshpub):
+  """ adds the node to the cluster
+      - updates the hostkey
+      - adds the ssh-key
+      - sets the node id
+      - sets the node status to installed
+  """
+
+  f = open("/etc/ssh/ssh_host_rsa_key", 'w')
+  f.write(rsa)
+  f.close()
+
+  f = open("/etc/ssh/ssh_host_rsa_key.pub", 'w')
+  f.write(rsapub)
+  f.close()
+
+  f = open("/etc/ssh/ssh_host_dsa_key", 'w')
+  f.write(dsa)
+  f.close()
+
+  f = open("/etc/ssh/ssh_host_dsa_key.pub", 'w')
+  f.write(dsapub)
+  f.close()
+
+  if not os.path.isdir("/root/.ssh"):
+    os.mkdir("/root/.ssh")
+
+  f = open("/root/.ssh/id_dsa", 'w')
+  f.write(ssh)
+  f.close()
+
+  f = open("/root/.ssh/id_dsa.pub", 'w')
+  f.write(sshpub)
+  f.close()
+
+  f = open('/root/.ssh/id_dsa.pub', 'r')
+  try:
+    utils.AddAuthorizedKey('/root/.ssh/authorized_keys', f.read(8192))
+  finally:
+    f.close()
+
+  utils.RunCmd(["/etc/init.d/ssh", "restart"])
+
+  utils.RemoveFile("/root/.ssh/known_hosts")
+  return True
+
+
+def LeaveCluster():
+  """Cleans up the current node and prepares it to be removed from the cluster.
+
+  """
+  if os.path.exists(constants.DATA_DIR):
+    for dirpath, dirnames, filenames in os.walk(constants.DATA_DIR):
+      if dirpath == constants.DATA_DIR:
+        for i in filenames:
+          os.unlink(os.path.join(dirpath, i))
+  utils.RemoveFile(constants.CLUSTER_NAME_FILE)
+
+  f = open('/root/.ssh/id_dsa.pub', 'r')
+  try:
+    utils.RemoveAuthorizedKey('/root/.ssh/authorized_keys', f.read(8192))
+  finally:
+    f.close()
+
+  utils.RemoveFile('/root/.ssh/id_dsa')
+  utils.RemoveFile('/root/.ssh/id_dsa.pub')
+
+
+def GetNodeInfo(vgname):
+  """ gives back a hash with different informations
+  about the node
+
+  Returns:
+    { 'vg_size' : xxx,  'vg_free' : xxx, 'memory_domain0': xxx,
+      'memory_free' : xxx, 'memory_total' : xxx }
+    where
+    vg_size is the size of the configured volume group in MiB
+    vg_free is the free size of the volume group in MiB
+    memory_dom0 is the memory allocated for domain0 in MiB
+    memory_free is the currently available (free) ram in MiB
+    memory_total is the total number of ram in MiB
+  """
+
+  outputarray = {}
+  vginfo = _GetVGInfo(vgname)
+  outputarray['vg_size'] = vginfo['vg_size']
+  outputarray['vg_free'] = vginfo['vg_free']
+
+  hyper = hypervisor.GetHypervisor()
+  hyp_info = hyper.GetNodeInfo()
+  if hyp_info is not None:
+    outputarray.update(hyp_info)
+
+  return outputarray
+
+
+def VerifyNode(what):
+  """Verify the status of the local node.
+
+  Args:
+    what - a dictionary of things to check:
+      'filelist' : list of files for which to compute checksums
+      'nodelist' : list of nodes we should check communication with
+      'hypervisor': run the hypervisor-specific verify
+
+  Requested files on local node are checksummed and the result returned.
+
+  The nodelist is traversed, with the following checks being made
+  for each node:
+  - known_hosts key correct
+  - correct resolving of node name (target node returns its own hostname
+    by ssh-execution of 'hostname', result compared against name in list).
+
+  """
+
+  result = {}
+
+  if 'hypervisor' in what:
+    result['hypervisor'] = hypervisor.GetHypervisor().Verify()
+
+  if 'filelist' in what:
+    result['filelist'] = utils.FingerprintFiles(what['filelist'])
+
+  if 'nodelist' in what:
+    result['nodelist'] = {}
+    for node in what['nodelist']:
+      success, message = ssh.VerifyNodeHostname(node)
+      if not success:
+        result['nodelist'][node] = message
+  return result
+
+
+def GetVolumeList(vg_name):
+  """Compute list of logical volumes and their size.
+
+  Returns:
+    dictionary of all partitions (key) with their size:
+    test1: 20.06MiB
+
+  """
+  result = utils.RunCmd(["lvs", "--noheadings", "--units=m",
+                         "-oname,size", vg_name])
+  if result.failed:
+    logger.Error("Failed to list logical volumes, lvs output: %s" %
+                 result.output)
+    return {}
+
+  lvlist = [line.split() for line in result.output.splitlines()]
+  return dict(lvlist)
+
+
+def ListVolumeGroups():
+  """List the volume groups and their size
+
+  Returns:
+    Dictionary with keys volume name and values the size of the volume
+
+  """
+  return utils.ListVolumeGroups()
+
+
+def BridgesExist(bridges_list):
+  """Check if a list of bridges exist on the current node
+
+  Returns:
+    True if all of them exist, false otherwise
+
+  """
+  for bridge in bridges_list:
+    if not utils.BridgeExists(bridge):
+      return False
+
+  return True
+
+
+def GetInstanceList():
+  """ provides a list of instances
+
+  Returns:
+    A list of all running instances on the current node
+    - instance1.example.com
+    - instance2.example.com
+  """
+
+  try:
+    names = hypervisor.GetHypervisor().ListInstances()
+  except errors.HypervisorError, err:
+    logger.Error("error enumerating instances: %s" % str(err))
+    raise
+
+  return names
+
+
+def GetInstanceInfo(instance):
+  """ gives back the informations about an instance
+  as a dictonary
+
+  Args:
+    instance: name of the instance (e.g. instance1.example.com)
+
+  Returns:
+    { 'memory' : 511, 'state' : '-b---', 'time' : 3188.8, }
+    where
+    memory: memory size of instance (int)
+    state: xen state of instance (string)
+    time: cpu time of instance (float)
+  """
+
+  output = {}
+
+  iinfo = hypervisor.GetHypervisor().GetInstanceInfo(instance)
+  if iinfo is not None:
+    output['memory'] = iinfo[2]
+    output['state'] = iinfo[4]
+    output['time'] = iinfo[5]
+
+  return output
+
+
+def GetAllInstancesInfo():
+  """Gather data about all instances.
+
+  This is the equivalent of `GetInstanceInfo()`, except that it
+  computes data for all instances at once, thus being faster if one
+  needs data about more than one instance.
+
+  Returns: a dictionary of dictionaries, keys being the instance name,
+    and with values:
+    { 'memory' : 511, 'vcpus' : 1, 'state' : '-b---', 'time' : 3188.8, }
+    where
+    memory: memory size of instance (int)
+    state: xen state of instance (string)
+    time: cpu time of instance (float)
+    vcpus: the number of cpus
+  """
+
+  output = {}
+
+  iinfo = hypervisor.GetHypervisor().GetAllInstancesInfo()
+  if iinfo:
+    for name, id, memory, vcpus, state, times in iinfo:
+      output[name] = {
+        'memory': memory,
+        'vcpus': vcpus,
+        'state': state,
+        'time': times,
+        }
+
+  return output
+
+
+def AddOSToInstance(instance, os_disk, swap_disk):
+  """Add an os to an instance.
+
+  Args:
+    instance: the instance object
+    os_disk: the instance-visible name of the os device
+    swap_disk: the instance-visible name of the swap device
+
+  """
+  inst_os = OSFromDisk(instance.os)
+
+  create_script = inst_os.create_script
+
+  for os_device in instance.disks:
+    if os_device.iv_name == os_disk:
+      break
+  else:
+    logger.Error("Can't find this device-visible name '%s'" % os_disk)
+    return False
+
+  for swap_device in instance.disks:
+    if swap_device.iv_name == swap_disk:
+      break
+  else:
+    logger.Error("Can't find this device-visible name '%s'" % swap_disk)
+    return False
+
+  real_os_dev = _RecursiveFindBD(os_device)
+  if real_os_dev is None:
+    raise errors.BlockDeviceError("Block device '%s' is not set up" %
+                                  str(os_device))
+  real_os_dev.Open()
+
+  real_swap_dev = _RecursiveFindBD(swap_device)
+  if real_swap_dev is None:
+    raise errors.BlockDeviceError("Block device '%s' is not set up" %
+                                  str(swap_device))
+  real_swap_dev.Open()
+
+  logfile = "%s/add-%s-%s-%d.log" % (constants.LOG_OS_DIR, instance.os,
+                                     instance.name, int(time.time()))
+  if not os.path.exists(constants.LOG_OS_DIR):
+    os.mkdir(constants.LOG_OS_DIR, 0750)
+
+  command = utils.BuildShellCmd("cd %s; %s -i %s -b %s -s %s &>%s",
+                                inst_os.path, create_script, instance.name,
+                                real_os_dev.dev_path, real_swap_dev.dev_path,
+                                logfile)
+
+  result = utils.RunCmd(command)
+
+  if result.failed:
+    logger.Error("os create command '%s' returned error: %s"
+                 " output: %s" %
+                 (command, result.fail_reason, result.output))
+    return False
+
+  return True
+
+
+def _GetVGInfo(vg_name):
+  """Get informations about the volume group.
+
+  Args:
+    vg_name: the volume group
+
+  Returns:
+    { 'vg_size' : xxx, 'vg_free' : xxx, 'pv_count' : xxx }
+    where
+    vg_size is the total size of the volume group in MiB
+    vg_free is the free size of the volume group in MiB
+    pv_count is the number of physical disks in that vg
+
+  """
+  retval = utils.RunCmd(["vgs", "-ovg_size,vg_free,pv_count", "--noheadings",
+                         "--nosuffix", "--units=m", "--separator=:", vg_name])
+
+  if retval.failed:
+    errmsg = "volume group %s not present" % vg_name
+    logger.Error(errmsg)
+    raise errors.LVMError(errmsg)
+  valarr = retval.stdout.strip().split(':')
+  retdic = {
+    "vg_size": int(round(float(valarr[0]), 0)),
+    "vg_free": int(round(float(valarr[1]), 0)),
+    "pv_count": int(valarr[2]),
+    }
+  return retdic
+
+
+def _GatherBlockDevs(instance):
+  """Set up an instance's block device(s).
+
+  This is run on the primary node at instance startup. The block
+  devices must be already assembled.
+
+  """
+  block_devices = []
+  for disk in instance.disks:
+    device = _RecursiveFindBD(disk)
+    if device is None:
+      raise errors.BlockDeviceError("Block device '%s' is not set up." %
+                                    str(disk))
+    device.Open()
+    block_devices.append((disk, device))
+  return block_devices
+
+
+def StartInstance(instance, extra_args):
+  """Start an instance.
+
+  Args:
+    instance - the instance object to start.
+  """
+
+  running_instances = GetInstanceList()
+
+  if instance.name in running_instances:
+    return True
+
+  block_devices = _GatherBlockDevs(instance)
+  hyper = hypervisor.GetHypervisor()
+
+  try:
+    hyper.StartInstance(instance, block_devices, extra_args)
+  except errors.HypervisorError, err:
+    logger.Error("Failed to start instance: %s" % err)
+    return False
+
+  return True
+
+
+def ShutdownInstance(instance):
+  """Shut an instance down.
+
+  Args:
+    instance - the instance object to shut down.
+  """
+
+  running_instances = GetInstanceList()
+
+  if instance.name not in running_instances:
+    return True
+
+  hyper = hypervisor.GetHypervisor()
+  try:
+    hyper.StopInstance(instance)
+  except errors.HypervisorError, err:
+    logger.Error("Failed to stop instance: %s" % err)
+    return False
+
+  # test every 10secs for 2min
+  shutdown_ok = False
+
+  time.sleep(1)
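+  # the else clause of this for loop runs only if we never broke out,
+  # i.e. if the instance is still in the list after all the retries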
+  for dummy in range(11):
+    if instance.name not in GetInstanceList():
+      break
+    time.sleep(10)
+  else:
+    # the shutdown did not succeed
+    logger.Error("shutdown of '%s' unsuccessful, using destroy" % instance)
+
+    try:
+      hyper.StopInstance(instance, force=True)
+    except errors.HypervisorError, err:
+      logger.Error("Failed to stop instance: %s" % err)
+      return False
+
+    time.sleep(1)
+    if instance.name in GetInstanceList():
+      logger.Error("could not shutdown instance '%s' even by destroy")
+      return False
+
+  return True
+
+
+def CreateBlockDevice(disk, size, on_primary):
+  """Creates a block device for an instance.
+
+  Args:
+   disk: a ganeti.objects.Disk object
+   size: the size of the physical underlying devices
+   on_primary: if the device should be `Assemble()`-d and
+            `Open()`-ed after creation (i.e. this is the primary node)
+
+  Returns:
+    the new unique_id of the device (this can sometimes be
+    computed only after creation), or None. On secondary nodes,
+    it's not required to return anything.
+
+  """
+  clist = []
+  if disk.children:
+    for child in disk.children:
+      crdev = _RecursiveAssembleBD(child, on_primary)
+      if on_primary or disk.AssembleOnSecondary():
+        # we need the children open in case the device itself has to
+        # be assembled
+        crdev.Open()
+      else:
+        crdev.Close()
+      clist.append(crdev)
+  try:
+    device = bdev.FindDevice(disk.dev_type, disk.physical_id, clist)
+    if device is not None:
+      logger.Info("removing existing device %s" % disk)
+      device.Remove()
+  except errors.BlockDeviceError, err:
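+    # the device doesn't exist or can't be attached, so there is
+    # nothing to remove before creating it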
+    pass
+
+  device = bdev.Create(disk.dev_type, disk.physical_id,
+                       clist, size)
+  if device is None:
+    raise ValueError("Can't create child device for %s, %s" %
+                     (disk, size))
+  if on_primary or disk.AssembleOnSecondary():
+    device.Assemble()
+    device.SetSyncSpeed(30*1024)
+    if on_primary or disk.OpenOnSecondary():
+      device.Open(force=True)
+  physical_id = device.unique_id
+  return physical_id
+
+
+def RemoveBlockDevice(disk):
+  """Remove a block device.
+
+  This is intended to be called recursively.
+
+  """
+  try:
+    # since we are removing the device, allow a partial match
+    # this allows removal of broken mirrors
+    rdev = _RecursiveFindBD(disk, allow_partial=True)
+  except errors.BlockDeviceError, err:
+    # probably can't attach
+    logger.Info("Can't attach to device %s in remove" % disk)
+    rdev = None
+  if rdev is not None:
+    result = rdev.Remove()
+  else:
+    result = True
+  if disk.children:
+    for child in disk.children:
+      result = result and RemoveBlockDevice(child)
+  return result
+
+
+def _RecursiveAssembleBD(disk, as_primary):
+  """Activate a block device for an instance.
+
+  This is run on the primary and secondary nodes for an instance.
+
+  This function is called recursively.
+
+  Args:
+    disk: an objects.Disk object
+    as_primary: if we should make the block device read/write
+
+  Returns:
+    the assembled device, or True when the device is intentionally
+    not assembled on this node
+
+  If the assembly is not successful, an exception is raised.
+
+  """
+  children = []
+  if disk.children:
+    for chld_disk in disk.children:
+      children.append(_RecursiveAssembleBD(chld_disk, as_primary))
+
+  if as_primary or disk.AssembleOnSecondary():
+    r_dev = bdev.AttachOrAssemble(disk.dev_type, disk.physical_id, children)
+    r_dev.SetSyncSpeed(30*1024)
+    result = r_dev
+    if as_primary or disk.OpenOnSecondary():
+      r_dev.Open()
+    else:
+      r_dev.Close()
+  else:
+    result = True
+  return result
+
+
+def AssembleBlockDevice(disk, as_primary):
+  """Activate a block device for an instance.
+
+  This is a wrapper over _RecursiveAssembleBD.
+
+  Returns:
+    a /dev path for primary nodes
+    True for secondary nodes
+
+  """
+  result = _RecursiveAssembleBD(disk, as_primary)
+  if isinstance(result, bdev.BlockDev):
+    result = result.dev_path
+  return result
+
+
+def ShutdownBlockDevice(disk):
+  """Shut down a block device.
+
+  First, if the device is assembled (can `Attach()`), then the device
+  is shutdown. Then the children of the device are shutdown.
+
+  This function is called recursively. Note that, as opposed to
+  assembly, we don't cache the children: shutting down a device
+  doesn't require that the upper device was active.
+
+  """
+  r_dev = _RecursiveFindBD(disk)
+  if r_dev is not None:
+    result = r_dev.Shutdown()
+  else:
+    result = True
+  if disk.children:
+    for child in disk.children:
+      result = result and ShutdownBlockDevice(child)
+  return result
+
+
+def MirrorAddChild(md_cdev, new_cdev):
+  """Extend an MD raid1 array.
+
+  """
+  md_bdev = _RecursiveFindBD(md_cdev, allow_partial=True)
+  if md_bdev is None:
+    logger.Error("Can't find md device")
+    return False
+  new_bdev = _RecursiveFindBD(new_cdev)
+  if new_bdev is None:
+    logger.Error("Can't find new device to add")
+    return False
+  new_bdev.Open()
+  md_bdev.AddChild(new_bdev)
+  return True
+
+
+def MirrorRemoveChild(md_cdev, new_cdev):
+  """Reduce an MD raid1 array.
+
+  """
+  md_bdev = _RecursiveFindBD(md_cdev)
+  if md_bdev is None:
+    return False
+  new_bdev = _RecursiveFindBD(new_cdev)
+  if new_bdev is None:
+    return False
+  new_bdev.Open()
+  md_bdev.RemoveChild(new_bdev.dev_path)
+  return True
+
+
+def GetMirrorStatus(disks):
+  """Get the mirroring status of a list of devices.
+
+  Args:
+    disks: list of `objects.Disk`
+
+  Returns:
+    list of (mirror_done, estimated_time) tuples, which
+    are the result of bdev.BlockDevice.CombinedSyncStatus()
+
+  """
+  stats = []
+  for dsk in disks:
+    rbd = _RecursiveFindBD(dsk)
+    if rbd is None:
+      raise errors.BlockDeviceError, "Can't find device %s" % str(dsk)
+    stats.append(rbd.CombinedSyncStatus())
+  return stats
+
+
+def _RecursiveFindBD(disk, allow_partial=False):
+  """Check if a device is activated.
+
+  If so, return information about the real device.
+
+  Args:
+    disk: the objects.Disk instance
+    allow_partial: don't abort the find if a child of the
+                   device can't be found; this is intended to be
+                   used when repairing mirrors
+
+  Returns:
+    None if the device can't be found
+    otherwise the device instance
+
+  """
+  children = []
+  if disk.children:
+    for chdisk in disk.children:
+      children.append(_RecursiveFindBD(chdisk))
+
+  return bdev.FindDevice(disk.dev_type, disk.physical_id, children)
+
+
+def FindBlockDevice(disk):
+  """Check if a device is activated.
+
+  If so, return information about the real device.
+
+  Args:
+    disk: the objects.Disk instance
+  Returns:
+    None if the device can't be found
+    (device_path, major, minor, sync_percent, estimated_time, is_degraded)
+
+  """
+  rbd = _RecursiveFindBD(disk)
+  if rbd is None:
+    return rbd
+  sync_p, est_t, is_degr = rbd.GetSyncStatus()
+  return rbd.dev_path, rbd.major, rbd.minor, sync_p, est_t, is_degr
+
+
+def UploadFile(file_name, data, mode, uid, gid, atime, mtime):
+  """Write a file to the filesystem.
+
+  This allows the master to overwrite(!) a file. It will only perform
+  the operation if the file belongs to a list of configuration files.
+
+  """
+  if not os.path.isabs(file_name):
+    logger.Error("Filename passed to UploadFile is not absolute: '%s'" %
+                 file_name)
+    return False
+
+  if file_name not in [constants.CLUSTER_CONF_FILE, "/etc/hosts",
+                       "/etc/ssh/ssh_known_hosts"]:
+    logger.Error("Filename passed to UploadFile not in allowed"
+                 " upload targets: '%s'" % file_name)
+    return False
+
+  dir_name, small_name = os.path.split(file_name)
+  fd, new_name = tempfile.mkstemp('.new', small_name, dir_name)
+  # here we need to make sure we remove the temp file, if any error
+  # leaves it in place
+  try:
+    os.chown(new_name, uid, gid)
+    os.chmod(new_name, mode)
+    os.write(fd, data)
+    os.fsync(fd)
+    os.utime(new_name, (atime, mtime))
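+    # the rename is atomic (the temp file and the target are in the
+    # same directory), so readers never see a partially written file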
+    os.rename(new_name, file_name)
+  finally:
+    os.close(fd)
+    utils.RemoveFile(new_name)
+  return True
+
+
+def _ErrnoOrStr(err):
+  """Format an EnvironmentError exception.
+
+  If the `err` argument has an errno attribute, it will be looked up
+  and converted into a textual EXXXX description. Otherwise the string
+  representation of the error will be returned.
+
+  """
+  if hasattr(err, 'errno'):
+    detail = errno.errorcode[err.errno]
+  else:
+    detail = str(err)
+  return detail
+
+
+def _OSOndiskVersion(name, os_dir=None):
+  """Compute and return the api version of a given OS.
+
+  This function will try to read the api version of the os given by
+  the 'name' parameter. By default, it will use the constants.OS_DIR
+  as top-level directory for OSes, but this can be overridden by the
+  use of the os_dir parameter. The return value is an integer denoting
+  the version; if the OS is not valid, an errors.InvalidOS exception
+  is raised instead.
+
+  """
+  if os_dir is None:
+    os_dir = os.path.sep.join([constants.OS_DIR, name])
+
+  api_file = os.path.sep.join([os_dir, "ganeti_api_version"])
+
+  try:
+    st = os.stat(api_file)
+  except EnvironmentError, err:
+    raise errors.InvalidOS, (name, "'ganeti_api_version' file not"
+                             " found (%s)" % _ErrnoOrStr(err))
+
+  if not stat.S_ISREG(stat.S_IFMT(st.st_mode)):
+    raise errors.InvalidOS, (name, "'ganeti_api_version' file is not"
+                             " a regular file")
+
+  try:
+    f = open(api_file)
+    try:
+      api_version = f.read(256)
+    finally:
+      f.close()
+  except EnvironmentError, err:
+    raise errors.InvalidOS, (name, "error while reading the"
+                             " API version (%s)" % _ErrnoOrStr(err))
+
+  api_version = api_version.strip()
+  try:
+    api_version = int(api_version)
+  except (TypeError, ValueError), err:
+    raise errors.InvalidOS, (name, "API version is not integer (%s)" %
+                             str(err))
+
+  return api_version
+
+
+def DiagnoseOS(top_dir=None):
+  """Compute the validity for all OSes.
+
+  For each name in the given top_dir parameter (if not given, defaults
+  to constants.OS_DIR), it will return an object. If this is a valid
+  os, the object will be an instance of the objects.OS class. If not,
+  it will be an instance of errors.InvalidOS and this signifies that
+  this name does not correspond to a valid OS.
+
+  Returns:
+    list of objects
+
+  """
+  if top_dir is None:
+    top_dir = constants.OS_DIR
+
+  try:
+    f_names = os.listdir(top_dir)
+  except EnvironmentError, err:
+    logger.Error("Can't list the OS directory: %s" % str(err))
+    return False
+  result = []
+  for name in f_names:
+    try:
+      os_inst = OSFromDisk(name, os.path.sep.join([top_dir, name]))
+      result.append(os_inst)
+    except errors.InvalidOS, err:
+      result.append(err)
+
+  return result
+
+
+def OSFromDisk(name, os_dir=None):
+  """Create an OS instance from disk.
+
+  This function will return an OS instance if the given name is a
+  valid OS name. Otherwise, it will raise an appropriate
+  `errors.InvalidOS` exception, detailing why this is not a valid
+  OS.
+
+  """
+  if os_dir is None:
+    os_dir = os.path.sep.join([constants.OS_DIR, name])
+
+  api_version = _OSOndiskVersion(name, os_dir)
+
+  if api_version != constants.OS_API_VERSION:
+    raise errors.InvalidOS, (name, "API version mismatch (found %s want %s)"
+                             % (api_version, constants.OS_API_VERSION))
+
+  # OS Scripts dictionary, we will populate it with the actual script names
+  os_scripts = {'create': '', 'export': '', 'import': ''}
+
+  for script in os_scripts:
+    os_scripts[script] = os.path.sep.join([os_dir, script])
+
+    try:
+      st = os.stat(os_scripts[script])
+    except EnvironmentError, err:
+      raise errors.InvalidOS, (name, "'%s' script missing (%s)" %
+                               (script, _ErrnoOrStr(err)))
+
+    if stat.S_IMODE(st.st_mode) & stat.S_IXUSR != stat.S_IXUSR:
+      raise errors.InvalidOS, (name, "'%s' script not executable" % script)
+
+    if not stat.S_ISREG(stat.S_IFMT(st.st_mode)):
+      raise errors.InvalidOS, (name, "'%s' is not a regular file" % script)
+
+  return objects.OS(name=name, path=os_dir,
+                    create_script=os_scripts['create'],
+                    export_script=os_scripts['export'],
+                    import_script=os_scripts['import'],
+                    api_version=api_version)
+
+
+def SnapshotBlockDevice(disk):
+  """Create a snapshot copy of a block device.
+
+  This function is called recursively, and the snapshot is actually created
+  just for the leaf lvm backend device.
+
+  Args:
+    disk: the disk to be snapshotted
+
+  Returns:
+    a config entry for the actual lvm device snapshotted.
+  """
+
+  if disk.children:
+    if len(disk.children) == 1:
+      # only one child, let's recurse on it
+      return SnapshotBlockDevice(disk.children[0])
+    else:
+      # more than one child, choose one that matches
+      for child in disk.children:
+        if child.size == disk.size:
+          # return implies breaking the loop
+          return SnapshotBlockDevice(child)
+  elif disk.dev_type == "lvm":
+    r_dev = _RecursiveFindBD(disk)
+    if r_dev is not None:
+      # let's stay on the safe side and ask for the full size, for now
+      return r_dev.Snapshot(disk.size)
+    else:
+      return None
+  else:
+    raise errors.ProgrammerError, ("Cannot snapshot non-lvm block device"
+                                   "'%s' of type '%s'" %
+                                   (disk.unique_id, disk.dev_type))
+
+
+def ExportSnapshot(disk, dest_node, instance):
+  """Export a block device snapshot to a remote node.
+
+  Args:
+    disk: the snapshot block device
+    dest_node: the node to send the image to
+    instance: instance being exported
+
+  Returns:
+    True if successful, False otherwise.
+  """
+
+  inst_os = OSFromDisk(instance.os)
+  export_script = inst_os.export_script
+
+  logfile = "%s/exp-%s-%s-%s.log" % (constants.LOG_OS_DIR, inst_os.name,
+                                     instance.name, int(time.time()))
+  if not os.path.exists(constants.LOG_OS_DIR):
+    os.mkdir(constants.LOG_OS_DIR, 0750)
+
+  real_os_dev = _RecursiveFindBD(disk)
+  if real_os_dev is None:
+    raise errors.BlockDeviceError("Block device '%s' is not set up" %
+                                  str(disk))
+  real_os_dev.Open()
+
+  destdir = os.path.join(constants.EXPORT_DIR, instance.name + ".new")
+  destfile = disk.physical_id[1]
+
+  # the target command is built out of three individual commands,
+  # which are joined by pipes; we check each individual command for
+  # valid parameters
+
+  expcmd = utils.BuildShellCmd("cd %s; %s -i %s -b %s 2>%s", inst_os.path,
+                               export_script, instance.name,
+                               real_os_dev.dev_path, logfile)
+
+  comprcmd = "gzip"
+
+  remotecmd = utils.BuildShellCmd("ssh -q -oStrictHostKeyChecking=yes"
+                                  " -oBatchMode=yes -oEscapeChar=none"
+                                  " %s 'mkdir -p %s; cat > %s/%s'",
+                                  dest_node, destdir, destdir, destfile)
+
+  # all commands have been checked, so we're safe to combine them
+  command = '|'.join([expcmd, comprcmd, remotecmd])
+
+  result = utils.RunCmd(command)
+
+  if result.failed:
+    logger.Error("os snapshot export command '%s' returned error: %s"
+                 " output: %s" %
+                 (command, result.fail_reason, result.output))
+    return False
+
+  return True
+
+
+def FinalizeExport(instance, snap_disks):
+  """Write out the export configuration information.
+
+  Args:
+    instance: instance configuration
+    snap_disks: snapshot block devices
+
+  Returns:
+    False in case of error, True otherwise.
+  """
+
+  destdir = os.path.join(constants.EXPORT_DIR, instance.name + ".new")
+  finaldestdir = os.path.join(constants.EXPORT_DIR, instance.name)
+
+  config = objects.SerializableConfigParser()
+
+  config.add_section(constants.INISECT_EXP)
+  config.set(constants.INISECT_EXP, 'version', '0')
+  config.set(constants.INISECT_EXP, 'timestamp', '%d' % int(time.time()))
+  config.set(constants.INISECT_EXP, 'source', instance.primary_node)
+  config.set(constants.INISECT_EXP, 'os', instance.os)
+  config.set(constants.INISECT_EXP, 'compression', 'gzip')
+
+  config.add_section(constants.INISECT_INS)
+  config.set(constants.INISECT_INS, 'name', instance.name)
+  config.set(constants.INISECT_INS, 'memory', '%d' % instance.memory)
+  config.set(constants.INISECT_INS, 'vcpus', '%d' % instance.vcpus)
+  config.set(constants.INISECT_INS, 'disk_template', instance.disk_template)
+  for nic_count, nic in enumerate(instance.nics):
+    config.set(constants.INISECT_INS, 'nic%d_mac' %
+               nic_count, '%s' % nic.mac)
+    config.set(constants.INISECT_INS, 'nic%d_ip' % nic_count, '%s' % nic.ip)
+  # TODO: redundant: on load can read nics until it doesn't exist
+  config.set(constants.INISECT_INS, 'nic_count', '%d' % nic_count)
+
+  for disk_count, disk in enumerate(snap_disks):
+    config.set(constants.INISECT_INS, 'disk%d_ivname' % disk_count,
+               ('%s' % disk.iv_name))
+    config.set(constants.INISECT_INS, 'disk%d_dump' % disk_count,
+               ('%s' % disk.physical_id[1]))
+    config.set(constants.INISECT_INS, 'disk%d_size' % disk_count,
+               ('%d' % disk.size))
+  config.set(constants.INISECT_INS, 'disk_count', '%d' % disk_count)
+
+  cff = os.path.join(destdir, constants.EXPORT_CONF_FILE)
+  cfo = open(cff, 'w')
+  try:
+    config.write(cfo)
+  finally:
+    cfo.close()
+
+  shutil.rmtree(finaldestdir, True)
+  shutil.move(destdir, finaldestdir)
+
+  return True
+
+
+def ExportInfo(dest):
+  """Get export configuration information.
+
+  Args:
+    dest: directory containing the export
+
+  Returns:
+    A serializable config file containing the export info.
+
+  """
+
+  cff = os.path.join(dest, constants.EXPORT_CONF_FILE)
+
+  config = objects.SerializableConfigParser()
+  config.read(cff)
+
+  if (not config.has_section(constants.INISECT_EXP) or
+      not config.has_section(constants.INISECT_INS)):
+    return None
+
+  return config
+
+
+def ImportOSIntoInstance(instance, os_disk, swap_disk, src_node, src_image):
+  """Import an os image into an instance.
+
+  Args:
+    instance: the instance object
+    os_disk: the instance-visible name of the os device
+    swap_disk: the instance-visible name of the swap device
+    src_node: node holding the source image
+    src_image: path to the source image on src_node
+
+  Returns:
+    False in case of error, True otherwise.
+
+  """
+
+  inst_os = OSFromDisk(instance.os)
+  import_script = inst_os.import_script
+
+  for os_device in instance.disks:
+    if os_device.iv_name == os_disk:
+      break
+  else:
+    logger.Error("Can't find this device-visible name '%s'" % os_disk)
+    return False
+
+  for swap_device in instance.disks:
+    if swap_device.iv_name == swap_disk:
+      break
+  else:
+    logger.Error("Can't find this device-visible name '%s'" % swap_disk)
+    return False
+
+  real_os_dev = _RecursiveFindBD(os_device)
+  if real_os_dev is None:
+    raise errors.BlockDeviceError, ("Block device '%s' is not set up" %
+                                    str(os_device))
+  real_os_dev.Open()
+
+  real_swap_dev = _RecursiveFindBD(swap_device)
+  if real_swap_dev is None:
+    raise errors.BlockDeviceError, ("Block device '%s' is not set up" %
+                                    str(swap_device))
+  real_swap_dev.Open()
+
+  logfile = "%s/import-%s-%s-%s.log" % (constants.LOG_OS_DIR, instance.os,
+                                        instance.name, int(time.time()))
+  if not os.path.exists(constants.LOG_OS_DIR):
+    os.mkdir(constants.LOG_OS_DIR, 0750)
+
+  remotecmd = utils.BuildShellCmd("ssh -q -oStrictHostKeyChecking=yes"
+                                  " -oBatchMode=yes -oEscapeChar=none"
+                                  " %s 'cat %s'", src_node, src_image)
+
+  comprcmd = "gunzip"
+  impcmd = utils.BuildShellCmd("(cd %s; %s -i %s -b %s -s %s &>%s)",
+                               inst_os.path, import_script, instance.name,
+                               real_os_dev.dev_path, real_swap_dev.dev_path,
+                               logfile)
+
+  command = '|'.join([remotecmd, comprcmd, impcmd])
+
+  result = utils.RunCmd(command)
+
+  if result.failed:
+    logger.Error("os import command '%s' returned error: %s"
+                 " output: %s" %
+                 (command, result.fail_reason, result.output))
+    return False
+
+  return True
+
+
+def ListExports():
+  """Return a list of exports currently available on this machine.
+  """
+  if os.path.isdir(constants.EXPORT_DIR):
+    return os.listdir(constants.EXPORT_DIR)
+  else:
+    return []
+
+
+def RemoveExport(export):
+  """Remove an existing export from the node.
+
+  Args:
+    export: the name of the export to remove
+
+  Returns:
+    False in case of error, True otherwise.
+  """
+
+  target = os.path.join(constants.EXPORT_DIR, export)
+
+  shutil.rmtree(target)
+  # TODO: catch some of the relevant exceptions and provide a pretty
+  # error message if rmtree fails.
+
+  return True
+
+
+class HooksRunner(object):
+  """Hook runner.
+
+  This class is instantiated on the node side (ganeti-noded) and not on
+  the master side.
+
+  """
+  RE_MASK = re.compile("^[a-zA-Z0-9_-]+$")
+
+  def __init__(self, hooks_base_dir=None):
+    """Constructor for hooks runner.
+
+    Args:
+      - hooks_base_dir: if not None, this overrides the
+        constants.HOOKS_BASE_DIR (useful for unittests)
+
+    """
+    if hooks_base_dir is None:
+      hooks_base_dir = constants.HOOKS_BASE_DIR
+    self._BASE_DIR = hooks_base_dir
+
+  @staticmethod
+  def ExecHook(script, env):
+    """Exec one hook script.
+
+    Args:
+     - script: the full path to the script
+     - env: the environment with which to exec the script
+
+    """
+    # exec the process using subprocess and log the output
+    fdstdin = None
+    try:
+      fdstdin = open("/dev/null", "r")
+      child = subprocess.Popen([script], stdin=fdstdin, stdout=subprocess.PIPE,
+                               stderr=subprocess.STDOUT, close_fds=True,
+                               shell=False, cwd="/", env=env)
+      output = ""
+      try:
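+        # capture at most the first 4 KB of the script's output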
+        output = child.stdout.read(4096)
+        child.stdout.close()
+      except EnvironmentError, err:
+        output += "Hook script error: %s" % str(err)
+
+      while True:
+        try:
+          result = child.wait()
+          break
+        except EnvironmentError, err:
+          if err.errno == errno.EINTR:
+            continue
+          raise
+    finally:
+      # try not to leak fds
+      for fd in (fdstdin, ):
+        if fd is not None:
+          try:
+            fd.close()
+          except EnvironmentError, err:
+            # just log the error
+            #logger.Error("While closing fd %s: %s" % (fd, err))
+            pass
+
+    return result == 0, output
+
+  def RunHooks(self, hpath, phase, env):
+    """Run the scripts in the hooks directory.
+
+    This method will not usually be overridden by child classes.
+
+    """
+    if phase == constants.HOOKS_PHASE_PRE:
+      suffix = "pre"
+    elif phase == constants.HOOKS_PHASE_POST:
+      suffix = "post"
+    else:
+      raise errors.ProgrammerError, ("Unknown hooks phase: '%s'" % phase)
+    rr = []
+
+    subdir = "%s-%s.d" % (hpath, suffix)
+    dir_name = "%s/%s" % (self._BASE_DIR, subdir)
+    try:
+      dir_contents = os.listdir(dir_name)
+    except OSError, err:
+      # must log
+      return rr
+
+    # we use the standard python sort order,
+    # so 00name is the recommended naming scheme
+    dir_contents.sort()
+    for relname in dir_contents:
+      fname = os.path.join(dir_name, relname)
+      if not (os.path.isfile(fname) and os.access(fname, os.X_OK) and
+          self.RE_MASK.match(relname) is not None):
+        rrval = constants.HKR_SKIP
+        output = ""
+      else:
+        result, output = self.ExecHook(fname, env)
+        if not result:
+          rrval = constants.HKR_FAIL
+        else:
+          rrval = constants.HKR_SUCCESS
+      rr.append(("%s/%s" % (subdir, relname), rrval, output))
+
+    return rr
diff --git a/lib/bdev.py b/lib/bdev.py
new file mode 100644
index 0000000000000000000000000000000000000000..d3ff77cff09f22f5db70b51aed2b2f702dd73943
--- /dev/null
+++ b/lib/bdev.py
@@ -0,0 +1,1492 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Block device abstraction"""
+
+import re
+import time
+import errno
+
+from ganeti import utils
+from ganeti import logger
+from ganeti import errors
+
+
+class BlockDev(object):
+  """Block device abstract class.
+
+  A block device can be in the following states:
+    - not existing on the system, and by `Create()` it goes into:
+    - existing but not setup/not active, and by `Assemble()` goes into:
+    - active read-write and by `Open()` it goes into
+    - online (=used, or ready for use)
+
+  A device can also be online but read-only; however, we are not
+  using the read-only state (MD and LV have it, should we need it in
+  the future), and we usually look at this like at a stack, so it's
+  easier to conceptualise the transition from not-existing to online
+  and back as a linear one.
+
+  The many different states of the device are due to the fact that we
+  need to cover many device types:
+    - logical volumes are created, lvchange -a y $lv, and used
+    - md arrays are created or assembled and used
+    - drbd devices are attached to a local disk/remote peer and made primary
+
+  The status of the device can be examined by `GetStatus()`, which
+  returns a numerical value, depending on the position in the
+  transition stack of the device.
+
+  A block device is identified by three items:
+    - the /dev path of the device (dynamic)
+    - a unique ID of the device (static)
+    - its major/minor pair (dynamic)
+
+  Not all devices implement both the first two as distinct items. LVM
+  logical volumes have their unique ID (the pair volume group, logical
+  volume name) in a 1-to-1 relation to the dev path. For MD devices,
+  the /dev path is dynamic and the unique ID is the UUID generated at
+  array creation plus the slave list. For DRBD devices, the /dev path
+  is again dynamic and the unique id is the pair (host1, dev1),
+  (host2, dev2).
+
+  You can get to a device in two ways:
+    - creating the (real) device, which returns you
+      an attached instance (lvcreate, mdadm --create)
+    - attaching of a python instance to an existing (real) device
+
+  The second point, the attachment to a device, is different
+  depending on whether the device is assembled or not. At init() time,
+  we search for a device with the same unique_id as us. If found,
+  good: it also means that the device is already assembled. If not,
+  we'll know our correct major/minor only after assembly.
+
+  """
+
+  STATUS_UNKNOWN = 0
+  STATUS_EXISTING = 1
+  STATUS_STANDBY = 2
+  STATUS_ONLINE = 3
+
+  STATUS_MAP = {
+    STATUS_UNKNOWN: "unknown",
+    STATUS_EXISTING: "existing",
+    STATUS_STANDBY: "ready for use",
+    STATUS_ONLINE: "online",
+    }
+
+
+  def __init__(self, unique_id, children):
+    self._children = children
+    self.dev_path = None
+    self.unique_id = unique_id
+    self.major = None
+    self.minor = None
+
+
+  def Assemble(self):
+    """Assemble the device from its components.
+
+    If this is a plain block device (e.g. LVM) then assemble does
+    nothing, as the LVM has no children and we don't put logical
+    volumes offline.
+
+    One guarantee is that after the device has been assembled, it
+    knows its major/minor numbers. This allows other devices (usually
+    parents) to probe correctly for their children.
+
+    """
+    status = True
+    for child in self._children:
+      if not isinstance(child, BlockDev):
+        raise TypeError("Invalid child passed of type '%s'" % type(child))
+      if not status:
+        break
+      status = status and child.Assemble()
+      if not status:
+        break
+      status = status and child.Open()
+
+    if not status:
+      for child in self._children:
+        child.Shutdown()
+    return status
+
+
+  def Attach(self):
+    """Find a device which matches our config and attach to it.
+
+    """
+    raise NotImplementedError
+
+
+  def Close(self):
+    """Notifies that the device will no longer be used for I/O.
+
+    """
+    raise NotImplementedError
+
+
+  @classmethod
+  def Create(cls, unique_id, children, size):
+    """Create the device.
+
+    If the device cannot be created, it will return None
+    instead. Error messages go to the logging system.
+
+    Note that for some devices, the unique_id is used, and for others,
+    the children. The idea is that these two, taken together, are
+    enough for both creation and assembly (later).
+
+    """
+    raise NotImplementedError
+
+
+  def Remove(self):
+    """Remove this device.
+
+    This makes sense only for some of the device types: LV and to a
+    lesser degree, md devices. Also note that if the device can't
+    attach, the removal can't be completed.
+
+    """
+    raise NotImplementedError
+
+
+  def GetStatus(self):
+    """Return the status of the device.
+
+    """
+    raise NotImplementedError
+
+
+  def Open(self, force=False):
+    """Make the device ready for use.
+
+    This makes the device ready for I/O. For now, just the DRBD
+    devices need this.
+
+    The force parameter signifies that if the device has any kind of
+    --force option, it should be used; we know what we are doing.
+
+    """
+    raise NotImplementedError
+
+
+  def Shutdown(self):
+    """Shut down the device, freeing its children.
+
+    This undoes the `Assemble()` work, except for the child
+    assembling; as such, the children of the device are still
+    assembled after this call.
+
+    """
+    raise NotImplementedError
+
+
+  def SetSyncSpeed(self, speed):
+    """Adjust the sync speed of the mirror.
+
+    In case this is not a mirroring device, this is a no-op.
+
+    """
+    result = True
+    if self._children:
+      for child in self._children:
+        result = result and child.SetSyncSpeed(speed)
+    return result
+
+
+  def GetSyncStatus(self):
+    """Returns the sync status of the device.
+
+    If this device is a mirroring device, this function returns the
+    status of the mirror.
+
+    Returns:
+     (sync_percent, estimated_time, is_degraded)
+
+    If sync_percent is None, it means all is ok. If estimated_time is
+    None, it means we can't estimate the time needed; otherwise it's
+    the time left in seconds. If is_degraded is True, it means the
+    device is missing redundancy; when sync_percent is also None, this
+    is usually a sign that something went wrong in the device setup.
+
+    """
+    return None, None, False
+
+
+  def CombinedSyncStatus(self):
+    """Calculate the mirror status recursively for our children.
+
+    The return value is the same as for `GetSyncStatus()` except the
+    minimum percent and maximum time are calculated across our
+    children.
+
+    """
+    min_percent, max_time, is_degraded = self.GetSyncStatus()
+    if self._children:
+      for child in self._children:
+        c_percent, c_time, c_degraded = child.GetSyncStatus()
+        if min_percent is None:
+          min_percent = c_percent
+        elif c_percent is not None:
+          min_percent = min(min_percent, c_percent)
+        if max_time is None:
+          max_time = c_time
+        elif c_time is not None:
+          max_time = max(max_time, c_time)
+        is_degraded = is_degraded or c_degraded
+    return min_percent, max_time, is_degraded
+
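+  # For example (illustrative): if this device reports (None, None, False)
+  # and its two children report (80.0, 120, False) and (None, None, True),
+  # CombinedSyncStatus() returns (80.0, 120, True).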
+
+  def __repr__(self):
+    return ("<%s: unique_id: %s, children: %s, %s:%s, %s>" %
+            (self.__class__, self.unique_id, self._children,
+             self.major, self.minor, self.dev_path))
+
+
+class LogicalVolume(BlockDev):
+  """Logical Volume block device.
+
+  """
+  def __init__(self, unique_id, children):
+    """Attaches to a LV device.
+
+    The unique_id is a tuple (vg_name, lv_name)
+
+    """
+    super(LogicalVolume, self).__init__(unique_id, children)
+    if not isinstance(unique_id, (tuple, list)) or len(unique_id) != 2:
+      raise ValueError("Invalid configuration data %s" % str(unique_id))
+    self._vg_name, self._lv_name = unique_id
+    self.dev_path = "/dev/%s/%s" % (self._vg_name, self._lv_name)
+    self.Attach()
+
+
+  @classmethod
+  def Create(cls, unique_id, children, size):
+    """Create a new logical volume.
+
+    """
+    if not isinstance(unique_id, (tuple, list)) or len(unique_id) != 2:
+      raise ValueError("Invalid configuration data %s" % str(unique_id))
+    vg_name, lv_name = unique_id
+    pvs_info = cls.GetPVInfo(vg_name)
+    if not pvs_info:
+      raise errors.BlockDeviceError("Can't compute PV info for vg %s" %
+                                    vg_name)
+    pvs_info.sort()
+    pvs_info.reverse()
+    free_size, pv_name = pvs_info[0]
+    if free_size < size:
+      raise errors.BlockDeviceError("Not enough free space: required %s,"
+                                    " available %s" % (size, free_size))
+    result = utils.RunCmd(["lvcreate", "-L%dm" % size, "-n%s" % lv_name,
+                           vg_name, pv_name])
+    if result.failed:
+      raise errors.BlockDeviceError(result.fail_reason)
+    return LogicalVolume(unique_id, children)
+
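+  # Usage sketch (hypothetical volume group and LV names): creating a
+  # 1024 MiB volume could look like:
+  #
+  #   lv = LogicalVolume.Create(("xenvg", "inst1.disk0"), None, 1024)
+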
+  @staticmethod
+  def GetPVInfo(vg_name):
+    """Get the free space info for PVs in a volume group.
+
+    Args:
+      vg_name: the volume group name
+
+    Returns:
+      list of (free_space, name) with free_space in mebibytes
+    """
+    command = ["pvs", "--noheadings", "--nosuffix", "--units=m",
+               "-opv_name,vg_name,pv_free,pv_attr", "--unbuffered",
+               "--separator=:"]
+    result = utils.RunCmd(command)
+    if result.failed:
+      logger.Error("Can't get the PV information: %s" % result.fail_reason)
+      return None
+    data = []
+    for line in result.stdout.splitlines():
+      fields = line.strip().split(':')
+      if len(fields) != 4:
+        logger.Error("Can't parse pvs output: line '%s'" % line)
+        return None
+      # skip over pvs from another vg or ones which are not allocatable
+      if fields[1] != vg_name or fields[3][0] != 'a':
+        continue
+      data.append((float(fields[2]), fields[0]))
+
+    return data
+
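+  # Example (illustrative, given the ':' separator used above): a pvs
+  # output line such as "/dev/sda3:xenvg:4096.00:a-" contributes the
+  # entry (4096.0, "/dev/sda3") when called for volume group "xenvg".
+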
+  def Remove(self):
+    """Remove this logical volume.
+
+    """
+    if self.minor is None and not self.Attach():
+      # the LV does not exist
+      return True
+    result = utils.RunCmd(["lvremove", "-f", "%s/%s" %
+                           (self._vg_name, self._lv_name)])
+    if result.failed:
+      logger.Error("Can't lvremove: %s" % result.fail_reason)
+
+    return not result.failed
+
+
+  def Attach(self):
+    """Attach to an existing LV.
+
+    This method will try to see if an existing and active LV exists
+    which matches our name. If so, its major/minor will be
+    recorded.
+
+    """
+    result = utils.RunCmd(["lvdisplay", self.dev_path])
+    if result.failed:
+      logger.Error("Can't find LV %s: %s" %
+                   (self.dev_path, result.fail_reason))
+      return False
+    match = re.compile("^ *Block device *([0-9]+):([0-9]+).*$")
+    for line in result.stdout.splitlines():
+      match_result = match.match(line)
+      if match_result:
+        self.major = int(match_result.group(1))
+        self.minor = int(match_result.group(2))
+        return True
+    return False
+
+
+  def Assemble(self):
+    """Assemble the device.
+
+    This is a no-op for the LV device type. Eventually, we could
+    lvchange -ay here if we see that the LV is not active.
+
+    """
+    return True
+
+
+  def Shutdown(self):
+    """Shutdown the device.
+
+    This is a no-op for the LV device type, as we don't deactivate the
+    volumes on shutdown.
+
+    """
+    return True
+
+
+  def GetStatus(self):
+    """Return the status of the device.
+
+    Logical volumes can be in all four states, although we don't
+    deactivate (lvchange -an) them on shutdown, so STATUS_EXISTING
+    should not be seen for our devices.
+
+    """
+    result = utils.RunCmd(["lvs", "--noheadings", "-olv_attr", self.dev_path])
+    if result.failed:
+      logger.Error("Can't display lv: %s" % result.fail_reason)
+      return self.STATUS_UNKNOWN
+    out = result.stdout.strip()
+    # format: type/permissions/alloc/fixed_minor/state/open
+    if len(out) != 6:
+      return self.STATUS_UNKNOWN
+    #writable = (out[1] == "w")
+    active = (out[4] == "a")
+    online = (out[5] == "o")
+    if online:
+      retval = self.STATUS_ONLINE
+    elif active:
+      retval = self.STATUS_STANDBY
+    else:
+      retval = self.STATUS_EXISTING
+
+    return retval
+
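+  # For example (illustrative): an lv_attr value of "-wi-ao" has 'a' in
+  # the state position and 'o' in the open position, so GetStatus()
+  # returns STATUS_ONLINE for such a volume.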
+
+  def Open(self, force=False):
+    """Make the device ready for I/O.
+
+    This is a no-op for the LV device type.
+
+    """
+    return True
+
+
+  def Close(self):
+    """Notifies that the device will no longer be used for I/O.
+
+    This is a no-op for the LV device type.
+
+    """
+    return True
+
+
+  def Snapshot(self, size):
+    """Create a snapshot copy of an lvm block device.
+
+    """
+
+    snap_name = self._lv_name + ".snap"
+
+    # remove existing snapshot if found
+    snap = LogicalVolume((self._vg_name, snap_name), None)
+    snap.Remove()
+
+    pvs_info = self.GetPVInfo(self._vg_name)
+    if not pvs_info:
+      raise errors.BlockDeviceError("Can't compute PV info for vg %s" %
+                                    self._vg_name)
+    pvs_info.sort()
+    pvs_info.reverse()
+    free_size, pv_name = pvs_info[0]
+    if free_size < size:
+      raise errors.BlockDeviceError("Not enough free space: required %s,"
+                                    " available %s" % (size, free_size))
+
+    result = utils.RunCmd(["lvcreate", "-L%dm" % size, "-s",
+                           "-n%s" % snap_name, self.dev_path])
+    if result.failed:
+      raise errors.BlockDeviceError("command: %s error: %s" %
+                                    (result.cmd, result.fail_reason))
+
+    return snap_name
+
+
+class MDRaid1(BlockDev):
+  """raid1 device implemented via md.
+
+  """
+  def __init__(self, unique_id, children):
+    super(MDRaid1, self).__init__(unique_id, children)
+    self.major = 9
+    self.Attach()
+
+
+  def Attach(self):
+    """Find an array which matches our config and attach to it.
+
+    This tries to find a MD array which has the same UUID as our own.
+
+    """
+    minor = self._FindMDByUUID(self.unique_id)
+    if minor is not None:
+      self._SetFromMinor(minor)
+    else:
+      self.minor = None
+      self.dev_path = None
+
+    return (minor is not None)
+
+
+  @staticmethod
+  def _GetUsedDevs():
+    """Compute the list of in-use MD devices.
+
+    It doesn't matter if the used devices have a different raid level,
+    just that they are in use.
+
+    """
+    mdstat = open("/proc/mdstat", "r")
+    data = mdstat.readlines()
+    mdstat.close()
+
+    used_md = {}
+    valid_line = re.compile("^md([0-9]+) : .*$")
+    for line in data:
+      match = valid_line.match(line)
+      if match:
+        md_no = int(match.group(1))
+        used_md[md_no] = line
+
+    return used_md
+
+
+  @staticmethod
+  def _GetDevInfo(minor):
+    """Get info about a MD device.
+
+    Currently only uuid is returned.
+
+    """
+    result = utils.RunCmd(["mdadm", "-D", "/dev/md%d" % minor])
+    if result.failed:
+      logger.Error("Can't display md: %s" % result.fail_reason)
+      return None
+    retval = {}
+    for line in result.stdout.splitlines():
+      line = line.strip()
+      kv = line.split(" : ", 1)
+      # only consider lines which actually contain a key/value pair
+      if len(kv) == 2:
+        if kv[0] == "UUID":
+          retval["uuid"] = kv[1]
+        elif kv[0] == "State":
+          retval["state"] = kv[1].split(", ")
+    return retval
+
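+  # Illustrative example (hypothetical values): "mdadm -D" output lines
+  #          UUID : 01234567:89abcdef:01234567:89abcdef
+  #         State : clean
+  # parse into {"uuid": "01234567:89abcdef:01234567:89abcdef",
+  #             "state": ["clean"]}.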
+
+  @staticmethod
+  def _FindUnusedMinor():
+    """Compute an unused MD minor.
+
+    This code assumes that there are 256 minors only.
+
+    """
+    used_md = MDRaid1._GetUsedDevs()
+    i = 0
+    while i < 256:
+      if i not in used_md:
+        break
+      i += 1
+    if i == 256:
+      logger.Error("Critical: Out of md minor numbers.")
+      return None
+    return i
+
+
+  @classmethod
+  def _FindMDByUUID(cls, uuid):
+    """Find the minor of an MD array with a given UUID.
+
+    """
+    md_list = cls._GetUsedDevs()
+    for minor in md_list:
+      info = cls._GetDevInfo(minor)
+      if info and info["uuid"] == uuid:
+        return minor
+    return None
+
+
+  @classmethod
+  def Create(cls, unique_id, children, size):
+    """Create a new MD raid1 array.
+
+    """
+    if not isinstance(children, (tuple, list)):
+      raise ValueError("Invalid setup data for MDRaid1 dev: %s" %
+                       str(children))
+    for i in children:
+      if not isinstance(i, BlockDev):
+        raise ValueError("Invalid member in MDRaid1 dev: %s" % type(i))
+    for i in children:
+      result = utils.RunCmd(["mdadm", "--zero-superblock", "--force",
+                             i.dev_path])
+      if result.failed:
+        logger.Error("Can't zero superblock: %s" % result.fail_reason)
+        return None
+    minor = cls._FindUnusedMinor()
+    if minor is None:
+      # _FindUnusedMinor has already logged the error
+      return None
+    result = utils.RunCmd(["mdadm", "--create", "/dev/md%d" % minor,
+                           "--auto=yes", "--force", "-l1",
+                           "-n%d" % len(children)] +
+                          [dev.dev_path for dev in children])
+
+    if result.failed:
+      logger.Error("Can't create md: %s" % result.fail_reason)
+      return None
+    info = cls._GetDevInfo(minor)
+    if not info or not "uuid" in info:
+      logger.Error("Wrong information returned from mdadm -D: %s" % str(info))
+      return None
+    return MDRaid1(info["uuid"], children)
+
+
+  def Remove(self):
+    """Stub remove function for MD RAID 1 arrays.
+
+    We don't remove the superblock right now; this is marked as a TODO.
+
+    """
+    #TODO: maybe zero superblock on child devices?
+    return self.Shutdown()
+
+
+  def AddChild(self, device):
+    """Add a new member to the md raid1.
+
+    """
+    if self.minor is None and not self.Attach():
+      raise errors.BlockDeviceError("Can't attach to device")
+    if device.dev_path is None:
+      raise errors.BlockDeviceError("New child is not initialised")
+    result = utils.RunCmd(["mdadm", "-a", self.dev_path, device.dev_path])
+    if result.failed:
+      raise errors.BlockDeviceError("Failed to add new device to array: %s" %
+                                    result.output)
+    new_len = len(self._children) + 1
+    # mdadm wants string arguments, so pass the new device count as one
+    result = utils.RunCmd(["mdadm", "--grow", self.dev_path,
+                           "-n", "%d" % new_len])
+    if result.failed:
+      raise errors.BlockDeviceError("Can't grow md array: %s" %
+                                    result.output)
+    self._children.append(device)
+
+
+  def RemoveChild(self, dev_path):
+    """Remove member from the md raid1.
+
+    """
+    if self.minor is None and not self.Attach():
+      raise errors.BlockDeviceError("Can't attach to device")
+    if len(self._children) == 1:
+      raise errors.BlockDeviceError("Can't reduce member when only one"
+                                    " child left")
+    for device in self._children:
+      if device.dev_path == dev_path:
+        break
+    else:
+      raise errors.BlockDeviceError("Can't find child with this path")
+    new_len = len(self._children) - 1
+    result = utils.RunCmd(["mdadm", "-f", self.dev_path, dev_path])
+    if result.failed:
+      raise errors.BlockDeviceError("Failed to mark device as failed: %s" %
+                                    result.output)
+
+    # it seems here we need a short delay for MD to update its
+    # superblocks
+    time.sleep(0.5)
+    result = utils.RunCmd(["mdadm", "-r", self.dev_path, dev_path])
+    if result.failed:
+      raise errors.BlockDeviceError("Failed to remove device from array:"
+                                    " %s" % result.output)
+    result = utils.RunCmd(["mdadm", "--grow", "--force", self.dev_path,
+                           "-n", "%d" % new_len])
+    if result.failed:
+      raise errors.BlockDeviceError("Can't shrink md array: %s" %
+                                    result.output)
+    self._children.remove(device)
+
+
+  def GetStatus(self):
+    """Return the status of the device.
+
+    """
+    self.Attach()
+    if self.minor is None:
+      retval = self.STATUS_UNKNOWN
+    else:
+      retval = self.STATUS_ONLINE
+    return retval
+
+
+  def _SetFromMinor(self, minor):
+    """Set our parameters based on the given minor.
+
+    This sets our minor variable and our dev_path.
+
+    """
+    self.minor = minor
+    self.dev_path = "/dev/md%d" % minor
+
+
+  def Assemble(self):
+    """Assemble the MD device.
+
+    At this point we should have:
+      - list of children devices
+      - uuid
+
+    """
+    result = super(MDRaid1, self).Assemble()
+    if not result:
+      return result
+    md_list = self._GetUsedDevs()
+    for minor in md_list:
+      info = self._GetDevInfo(minor)
+      if info and info["uuid"] == self.unique_id:
+        self._SetFromMinor(minor)
+        logger.Info("MD array %s already started" % str(self))
+        return True
+    free_minor = self._FindUnusedMinor()
+    if free_minor is None:
+      # _FindUnusedMinor has already logged the error
+      return False
+    result = utils.RunCmd(["mdadm", "-A", "--auto=yes", "--uuid",
+                           self.unique_id, "/dev/md%d" % free_minor] +
+                          [bdev.dev_path for bdev in self._children])
+    if result.failed:
+      logger.Error("Can't assemble MD array: %s" % result.fail_reason)
+      self.minor = None
+    else:
+      self.minor = free_minor
+    return not result.failed
+
+
+  def Shutdown(self):
+    """Tear down the MD array.
+
+    This does a 'mdadm --stop' so after this command, the array is no
+    longer available.
+
+    """
+    if self.minor is None and not self.Attach():
+      logger.Info("MD object not attached to a device")
+      return True
+
+    result = utils.RunCmd(["mdadm", "--stop", "/dev/md%d" % self.minor])
+    if result.failed:
+      logger.Error("Can't stop MD array: %s" % result.fail_reason)
+      return False
+    self.minor = None
+    self.dev_path = None
+    return True
+
+
+  def SetSyncSpeed(self, kbytes):
+    """Set the maximum sync speed for the MD array.
+
+    """
+    result = super(MDRaid1, self).SetSyncSpeed(kbytes)
+    if self.minor is None:
+      logger.Error("MD array not attached to a device")
+      return False
+    f = open("/sys/block/md%d/md/sync_speed_max" % self.minor, "w")
+    try:
+      f.write("%d" % kbytes)
+    finally:
+      f.close()
+    f = open("/sys/block/md%d/md/sync_speed_min" % self.minor, "w")
+    try:
+      f.write("%d" % (kbytes/2))
+    finally:
+      f.close()
+    return result
+
+
+  def GetSyncStatus(self):
+    """Returns the sync status of the device.
+
+    Returns:
+     (sync_percent, estimated_time, is_degraded)
+
+    If sync_percent is None, it means all is ok. If estimated_time is
+    None, it means we can't estimate the time needed; otherwise it's
+    the time left in seconds.
+
+    """
+    if self.minor is None and not self.Attach():
+      raise errors.BlockDeviceError("Can't attach to device in GetSyncStatus")
+    dev_info = self._GetDevInfo(self.minor)
+    is_clean = ("state" in dev_info and
+                len(dev_info["state"]) == 1 and
+                dev_info["state"][0] in ("clean", "active"))
+    sys_path = "/sys/block/md%s/md/" % self.minor
+    f = file(sys_path + "sync_action")
+    sync_status = f.readline().strip()
+    f.close()
+    if sync_status == "idle":
+      return None, None, not is_clean
+    f = file(sys_path + "sync_completed")
+    sync_completed = f.readline().strip().split(" / ")
+    f.close()
+    if len(sync_completed) != 2:
+      return 0, None, not is_clean
+    sync_done, sync_total = [float(i) for i in sync_completed]
+    sync_percent = 100.0*sync_done/sync_total
+    f = file(sys_path + "sync_speed")
+    sync_speed_k = int(f.readline().strip())
+    f.close()
+    if sync_speed_k == 0:
+      time_est = None
+    else:
+      # the kernel counters are in sectors of 512 bytes, while the
+      # speed is reported in KiB/s, hence the division by two
+      time_est = (sync_total - sync_done) / 2 / sync_speed_k
+    return sync_percent, time_est, not is_clean
+
+
+  def Open(self, force=False):
+    """Make the device ready for I/O.
+
+    This is a no-op for the MDRaid1 device type, although we could use
+    the new array_state attribute of kernels 2.6.18 and newer.
+
+    """
+    return True
+
+
+  def Close(self):
+    """Notifies that the device will no longer be used for I/O.
+
+    This is a no-op for the MDRaid1 device type, but see comment for
+    `Open()`.
+
+    """
+    return True
+
+
+class DRBDev(BlockDev):
+  """DRBD block device.
+
+  This implements the local host part of the DRBD device, i.e. it
+  doesn't do anything to the supposed peer. If you need a fully
+  connected DRBD pair, you need to use this class on both hosts.
+
+  The unique_id for the drbd device is the (local_ip, local_port,
+  remote_ip, remote_port) tuple, and it must have two children: the
+  data device and the meta_device. The meta device is checked for
+  valid size and is zeroed on create.
+
+  """
+  _DRBD_MAJOR = 147
+  _ST_UNCONFIGURED = "Unconfigured"
+  _ST_WFCONNECTION = "WFConnection"
+  _ST_CONNECTED = "Connected"
+
+  def __init__(self, unique_id, children):
+    super(DRBDev, self).__init__(unique_id, children)
+    self.major = self._DRBD_MAJOR
+    if len(children) != 2:
+      raise ValueError("Invalid configuration data %s" % str(children))
+    if not isinstance(unique_id, (tuple, list)) or len(unique_id) != 4:
+      raise ValueError("Invalid configuration data %s" % str(unique_id))
+    self._lhost, self._lport, self._rhost, self._rport = unique_id
+    self.Attach()
+
+  @staticmethod
+  def _DevPath(minor):
+    """Return the path to a drbd device for a given minor.
+
+    """
+    return "/dev/drbd%d" % minor
+
+  @staticmethod
+  def _GetProcData():
+    """Return data from /proc/drbd.
+
+    """
+    stat = open("/proc/drbd", "r")
+    data = stat.read().splitlines()
+    stat.close()
+    return data
+
+
+  @classmethod
+  def _GetUsedDevs(cls):
+    """Compute the list of used DRBD devices.
+
+    """
+    data = cls._GetProcData()
+
+    used_devs = {}
+    valid_line = re.compile("^ *([0-9]+): cs:([^ ]+).*$")
+    for line in data:
+      match = valid_line.match(line)
+      if not match:
+        continue
+      minor = int(match.group(1))
+      state = match.group(2)
+      if state == cls._ST_UNCONFIGURED:
+        continue
+      used_devs[minor] = state, line
+
+    return used_devs
+
+
+  @classmethod
+  def _FindUnusedMinor(cls):
+    """Find an unused DRBD device.
+
+    """
+    data = cls._GetProcData()
+
+    valid_line = re.compile("^ *([0-9]+): cs:Unconfigured$")
+    for line in data:
+      match = valid_line.match(line)
+      if match:
+        return int(match.group(1))
+    logger.Error("Error: no free drbd minors!")
+    return None
+
+
+  @classmethod
+  def _GetDevInfo(cls, minor):
+    """Get details about a given DRBD minor.
+
+    This returns, if available, the local backing device in (major,
+    minor) format and the local and remote (ip, port) information.
+
+    """
+    data = {}
+    result = utils.RunCmd(["drbdsetup", cls._DevPath(minor), "show"])
+    if result.failed:
+      logger.Error("Can't display the drbd config: %s" % result.fail_reason)
+      return data
+    out = result.stdout
+    if out == "Not configured\n":
+      return data
+    for line in out.splitlines():
+      if "local_dev" not in data:
+        match = re.match("^Lower device: ([0-9]+):([0-9]+) .*$", line)
+        if match:
+          data["local_dev"] = (int(match.group(1)), int(match.group(2)))
+          continue
+      if "meta_dev" not in data:
+        match = re.match("^Meta device: (([0-9]+):([0-9]+)|internal).*$", line)
+        if match:
+          if match.group(2) is not None and match.group(3) is not None:
+            # matched on the major/minor
+            data["meta_dev"] = (int(match.group(2)), int(match.group(3)))
+          else:
+            # matched on the "internal" string
+            data["meta_dev"] = match.group(1)
+            # in this case, no meta_index is in the output
+            data["meta_index"] = -1
+          continue
+      if "meta_index" not in data:
+        match = re.match("^Meta index: ([0-9]+).*$", line)
+        if match:
+          data["meta_index"] = int(match.group(1))
+          continue
+      if "local_addr" not in data:
+        match = re.match("^Local address: ([0-9.]+):([0-9]+)$", line)
+        if match:
+          data["local_addr"] = (match.group(1), int(match.group(2)))
+          continue
+      if "remote_addr" not in data:
+        match = re.match("^Remote address: ([0-9.]+):([0-9]+)$", line)
+        if match:
+          data["remote_addr"] = (match.group(1), int(match.group(2)))
+          continue
+    return data
+
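+  # Illustrative example (hypothetical device and addresses): "drbdsetup
+  # show" output such as
+  #   Lower device: 254:0   (/dev/xyz)
+  #   Meta device: internal
+  #   Local address: 192.0.2.1:11000
+  #   Remote address: 192.0.2.2:11000
+  # parses into {"local_dev": (254, 0), "meta_dev": "internal",
+  #              "meta_index": -1, "local_addr": ("192.0.2.1", 11000),
+  #              "remote_addr": ("192.0.2.2", 11000)}.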
+
+  def _MatchesLocal(self, info):
+    """Test if our local config matches with an existing device.
+
+    The parameter should be as returned from `_GetDevInfo()`. This
+    method tests if our local backing device is the same as the one in
+    the info parameter, in effect testing if we look like the given
+    device.
+
+    """
+    if not ("local_dev" in info and "meta_dev" in info and
+            "meta_index" in info):
+      return False
+
+    backend = self._children[0]
+    if backend is not None:
+      retval = (info["local_dev"] == (backend.major, backend.minor))
+    else:
+      retval = (info["local_dev"] == (0, 0))
+    meta = self._children[1]
+    if meta is not None:
+      retval = retval and (info["meta_dev"] == (meta.major, meta.minor))
+      retval = retval and (info["meta_index"] == 0)
+    else:
+      retval = retval and (info["meta_dev"] == "internal" and
+                           info["meta_index"] == -1)
+    return retval
+
+
+  def _MatchesNet(self, info):
+    """Test if our network config matches with an existing device.
+
+    The parameter should be as returned from `_GetDevInfo()`. This
+    method tests if our network configuration is the same as the one
+    in the info parameter, in effect testing if we look like the given
+    device.
+
+    """
+    if (self._lhost is None and "local_addr" not in info and
+        self._rhost is None and "remote_addr" not in info):
+      return True
+
+    if self._lhost is None:
+      return False
+
+    if not ("local_addr" in info and
+            "remote_addr" in info):
+      return False
+
+    retval = (info["local_addr"] == (self._lhost, self._lport))
+    retval = (retval and
+              info["remote_addr"] == (self._rhost, self._rport))
+    return retval
+
+
+  @staticmethod
+  def _IsValidMeta(meta_device):
+    """Check if the given meta device looks like a valid one.
+
+    This currently only checks the size, which must be around
+    128MiB.
+
+    """
+    result = utils.RunCmd(["blockdev", "--getsize", meta_device])
+    if result.failed:
+      logger.Error("Failed to get device size: %s" % result.fail_reason)
+      return False
+    try:
+      sectors = int(result.stdout)
+    except ValueError:
+      logger.Error("Invalid output from blockdev: '%s'" % result.stdout)
+      return False
+    bytes = sectors * 512
+    if bytes < 128*1024*1024: # less than 128MiB
+      logger.Error("Meta device too small (%.2fMib)" % (bytes/1024/1024))
+      return False
+    if bytes > (128+32)*1024*1024: # account for an extra (big) PE on LVM
+      logger.Error("Meta device too big (%.2fMiB)" % (bytes/1024/1024))
+      return False
+    return True
+
+
+  @classmethod
+  def _AssembleLocal(cls, minor, backend, meta):
+    """Configure the local part of a DRBD device.
+
+    This is the first thing that must be done on an unconfigured DRBD
+    device. And it must be done only once.
+
+    """
+    if not cls._IsValidMeta(meta):
+      return False
+    result = utils.RunCmd(["drbdsetup", cls._DevPath(minor), "disk",
+                           backend, meta, "0", "-e", "detach"])
+    if result.failed:
+      logger.Error("Can't attach local disk: %s" % result.output)
+    return not result.failed
+
+
+  @classmethod
+  def _ShutdownLocal(cls, minor):
+    """Detach from the local device.
+
+    I/Os will continue to be served from the remote device. If we
+    don't have a remote device, this operation will fail.
+
+    """
+    result = utils.RunCmd(["drbdsetup", cls._DevPath(minor), "detach"])
+    if result.failed:
+      logger.Error("Can't detach local device: %s" % result.output)
+    return not result.failed
+
+
+  @staticmethod
+  def _ShutdownAll(minor):
+    """Deactivate the device.
+
+    This will, of course, fail if the device is in use.
+
+    """
+    result = utils.RunCmd(["drbdsetup", DRBDev._DevPath(minor), "down"])
+    if result.failed:
+      logger.Error("Can't shutdown drbd device: %s" % result.output)
+    return not result.failed
+
+
+  @classmethod
+  def _AssembleNet(cls, minor, net_info, protocol):
+    """Configure the network part of the device.
+
+    This operation can be, in theory, done multiple times, but there
+    have been cases (in lab testing) in which the network part of the
+    device had become stuck and couldn't be shut down because activity
+    from the new peer (also stuck) triggered a timer re-init and
+    needed remote peer interface shutdown in order to clear. So please
+    don't change online the net config.
+
+    """
+    lhost, lport, rhost, rport = net_info
+    result = utils.RunCmd(["drbdsetup", cls._DevPath(minor), "net",
+                           "%s:%s" % (lhost, lport), "%s:%s" % (rhost, rport),
+                           protocol])
+    if result.failed:
+      logger.Error("Can't setup network for dbrd device: %s" %
+                   result.fail_reason)
+      return False
+
+    timeout = time.time() + 10
+    ok = False
+    while time.time() < timeout:
+      info = cls._GetDevInfo(minor)
+      if not "local_addr" in info or not "remote_addr" in info:
+        time.sleep(1)
+        continue
+      if (info["local_addr"] != (lhost, lport) or
+          info["remote_addr"] != (rhost, rport)):
+        time.sleep(1)
+        continue
+      ok = True
+      break
+    if not ok:
+      logger.Error("Timeout while configuring network")
+      return False
+    return True
+
+
+  @classmethod
+  def _ShutdownNet(cls, minor):
+    """Disconnect from the remote peer.
+
+    This fails if we don't have a local device.
+
+    """
+    result = utils.RunCmd(["drbdsetup", cls._DevPath(minor), "disconnect"])
+    logger.Error("Can't shutdown network: %s" % result.output)
+    return not result.failed
+
+
+  def _SetFromMinor(self, minor):
+    """Set our parameters based on the given minor.
+
+    This sets our minor variable and our dev_path.
+
+    """
+    if minor is None:
+      self.minor = self.dev_path = None
+    else:
+      self.minor = minor
+      self.dev_path = self._DevPath(minor)
+
+
+  def Assemble(self):
+    """Assemble the drbd.
+
+    Method:
+      - if we have a local backing device, we bind to it by:
+        - checking the list of used drbd devices
+        - check if the local minor use of any of them is our own device
+        - if yes, abort?
+        - if not, bind
+      - if we have a local/remote net info:
+        - redo the local backing device step for the remote device
+        - check if any drbd device is using the local port,
+          if yes abort
+        - check if any remote drbd device is using the remote
+          port, if yes abort (for now)
+        - bind our net port
+        - bind the remote net port
+
+    """
+    self.Attach()
+    if self.minor is not None:
+      logger.Info("Already assembled")
+      return True
+
+    result = super(DRBDev, self).Assemble()
+    if not result:
+      return result
+
+    minor = self._FindUnusedMinor()
+    if minor is None:
+      raise errors.BlockDeviceError("Not enough free minors for DRBD!")
+    need_localdev_teardown = False
+    if self._children[0]:
+      result = self._AssembleLocal(minor, self._children[0].dev_path,
+                                   self._children[1].dev_path)
+      if not result:
+        return False
+      need_localdev_teardown = True
+    if self._lhost and self._lport and self._rhost and self._rport:
+      result = self._AssembleNet(minor,
+                                 (self._lhost, self._lport,
+                                  self._rhost, self._rport),
+                                 "C")
+      if not result:
+        if need_localdev_teardown:
+          # we will ignore failures from this
+          logger.Error("net setup failed, tearing down local device")
+          self._ShutdownAll(minor)
+        return False
+    self._SetFromMinor(minor)
+    return True
+
+
+  def Shutdown(self):
+    """Shutdown the DRBD device.
+
+    """
+    if self.minor is None and not self.Attach():
+      logger.Info("DRBD device not attached to a device during Shutdown")
+      return True
+    if not self._ShutdownAll(self.minor):
+      return False
+    self.minor = None
+    self.dev_path = None
+    return True
+
+
+  def Attach(self):
+    """Find a DRBD device which matches our config and attach to it.
+
+    In case the device is partially attached (the local device matches
+    but there is no network setup), we perform the network attach. If
+    that succeeds, we re-check whether the attach can now be reported
+    as successful.
+
+    """
+    for minor in self._GetUsedDevs():
+      info = self._GetDevInfo(minor)
+      match_l = self._MatchesLocal(info)
+      match_r = self._MatchesNet(info)
+      if match_l and match_r:
+        break
+      if match_l and not match_r and "local_addr" not in info:
+        res_r = self._AssembleNet(minor,
+                                  (self._lhost, self._lport,
+                                   self._rhost, self._rport),
+                                  "C")
+        if res_r and self._MatchesNet(self._GetDevInfo(minor)):
+          break
+    else:
+      minor = None
+
+    self._SetFromMinor(minor)
+    return minor is not None
+
+
+  def Open(self, force=False):
+    """Make the local state primary.
+
+    If the 'force' parameter is given, the '--do-what-I-say' flag is
+    passed to drbdsetup. Since this is a potentially dangerous
+    operation, the force flag should only be given right after
+    creation, when it actually has to be given.
+
+    """
+    if self.minor is None and not self.Attach():
+      logger.Error("DRBD cannot attach to a device during open")
+      return False
+    cmd = ["drbdsetup", self.dev_path, "primary"]
+    if force:
+      cmd.append("--do-what-I-say")
+    result = utils.RunCmd(cmd)
+    if result.failed:
+      logger.Error("Can't make drbd device primary: %s" % result.output)
+      return False
+    return True
+
+
+  def Close(self):
+    """Make the local state secondary.
+
+    This will, of course, fail if the device is in use.
+
+    """
+    if self.minor is None and not self.Attach():
+      logger.Info("Instance not attached to a device")
+      raise errors.BlockDeviceError("Can't find device")
+    result = utils.RunCmd(["drbdsetup", self.dev_path, "secondary"])
+    if result.failed:
+      logger.Error("Can't switch drbd device to secondary: %s" % result.output)
+      raise errors.BlockDeviceError("Can't switch drbd device to secondary")
+
+
+  def SetSyncSpeed(self, kbytes):
+    """Set the speed of the DRBD syncer.
+
+    """
+    children_result = super(DRBDev, self).SetSyncSpeed(kbytes)
+    if self.minor is None:
+      logger.Info("Instance not attached to a device")
+      return False
+    result = utils.RunCmd(["drbdsetup", self.dev_path, "syncer", "-r", "%d" %
+                           kbytes])
+    if result.failed:
+      logger.Error("Can't change syncer rate: %s " % result.fail_reason)
+    return not result.failed and children_result
+
+
+  def GetSyncStatus(self):
+    """Returns the sync status of the device.
+
+    Returns:
+     (sync_percent, estimated_time, is_degraded)
+
+    If sync_percent is None, it means all is ok. If estimated_time is
+    None, it means we can't estimate the time needed; otherwise it's
+    the time left in seconds.
+
+    """
+    if self.minor is None and not self.Attach():
+      raise errors.BlockDeviceError("Can't attach to device in GetSyncStatus")
+    proc_info = self._MassageProcData(self._GetProcData())
+    if self.minor not in proc_info:
+      raise errors.BlockDeviceError("Can't find myself in /proc (minor %d)" %
+                                    self.minor)
+    line = proc_info[self.minor]
+    match = re.match("^.*sync'ed: *([0-9.]+)%.*"
+                     " finish: ([0-9]+):([0-9]+):([0-9]+) .*$", line)
+    if match:
+      sync_percent = float(match.group(1))
+      hours = int(match.group(2))
+      minutes = int(match.group(3))
+      seconds = int(match.group(4))
+      est_time = hours * 3600 + minutes * 60 + seconds
+    else:
+      sync_percent = None
+      est_time = None
+    match = re.match("^ *[0-9]+: cs:([^ ]+).*$", line)
+    if not match:
+      raise errors.BlockDeviceError("Can't find my data in /proc (minor %d)" %
+                                    self.minor)
+    client_state = match.group(1)
+    is_degraded = client_state != "Connected"
+    return sync_percent, est_time, is_degraded
+
+
+  @staticmethod
+  def _MassageProcData(data):
+    """Transform the output of _GetProdData into a nicer form.
+
+    Returns:
+      a dictionary of minor: joined lines from /proc/drbd for that minor
+
+    """
+    lmatch = re.compile("^ *([0-9]+):.*$")
+    results = {}
+    old_minor = old_line = None
+    for line in data:
+      lresult = lmatch.match(line)
+      if lresult is not None:
+        if old_minor is not None:
+          results[old_minor] = old_line
+        old_minor = int(lresult.group(1))
+        old_line = line
+      else:
+        if old_minor is not None:
+          old_line += " " + line.strip()
+    # add last line
+    if old_minor is not None:
+      results[old_minor] = old_line
+    return results
+
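+  # Illustrative example: the /proc/drbd lines
+  #   " 0: cs:Connected st:Primary/Secondary"
+  #   "    ns:12345 nr:0"
+  # (hypothetical values) are joined into
+  # {0: " 0: cs:Connected st:Primary/Secondary ns:12345 nr:0"}.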
+
+  def GetStatus(self):
+    """Compute the status of the DRBD device
+
+    Note that DRBD devices don't have the STATUS_EXISTING state.
+
+    """
+    if self.minor is None and not self.Attach():
+      return self.STATUS_UNKNOWN
+
+    data = self._GetProcData()
+    match = re.compile("^ *%d: cs:[^ ]+ st:(Primary|Secondary)/.*$" %
+                       self.minor)
+    for line in data:
+      mresult = match.match(line)
+      if mresult:
+        break
+    else:
+      logger.Error("Can't find myself!")
+      return self.STATUS_UNKNOWN
+
+    state = mresult.group(1)
+    if state == "Primary":
+      result = self.STATUS_ONLINE
+    else:
+      result = self.STATUS_STANDBY
+
+    return result
+
+
+  @staticmethod
+  def _ZeroDevice(device):
+    """Zero a device.
+
+    This writes until we get ENOSPC.
+
+    """
+    f = open(device, "w")
+    buf = "\0" * 1048576
+    try:
+      while True:
+        f.write(buf)
+    except IOError, err:
+      if err.errno != errno.ENOSPC:
+        raise
+
+
+  @classmethod
+  def Create(cls, unique_id, children, size):
+    """Create a new DRBD device.
+
+    Since DRBD devices are not created per se, just assembled, this
+    function just zeroes the meta device.
+
+    """
+    if len(children) != 2:
+      raise errors.ProgrammerError("Invalid setup for the drbd device")
+    meta = children[1]
+    meta.Assemble()
+    if not meta.Attach():
+      raise errors.BlockDeviceError("Can't attach to meta device")
+    if not cls._IsValidMeta(meta.dev_path):
+      raise errors.BlockDeviceError("Invalid meta device")
+    logger.Info("Started zeroing device %s" % meta.dev_path)
+    cls._ZeroDevice(meta.dev_path)
+    logger.Info("Done zeroing device %s" % meta.dev_path)
+    return cls(unique_id, children)
+
+
+  def Remove(self):
+    """Stub remove for DRBD devices.
+
+    """
+    return self.Shutdown()
+
+
+DEV_MAP = {
+  "lvm": LogicalVolume,
+  "md_raid1": MDRaid1,
+  "drbd": DRBDev,
+  }
+
+
+def FindDevice(dev_type, unique_id, children):
+  """Search for an existing, assembled device.
+
+  This will succeed only if the device exists and is assembled, but it
+  does not do any actions in order to activate the device.
+
+  """
+  if dev_type not in DEV_MAP:
+    raise errors.ProgrammerError("Invalid block device type '%s'" % dev_type)
+  device = DEV_MAP[dev_type](unique_id, children)
+  if not device.Attach():
+    return None
+  return device
+
+
+def AttachOrAssemble(dev_type, unique_id, children):
+  """Try to attach or assemble an existing device.
+
+  This will attach to an existing assembled device or will assemble
+  the device, as needed, to bring it fully up.
+
+  """
+  if dev_type not in DEV_MAP:
+    raise errors.ProgrammerError("Invalid block device type '%s'" % dev_type)
+  device = DEV_MAP[dev_type](unique_id, children)
+  if not device.Attach():
+    device.Assemble()
+  if not device.Attach():
+    raise errors.BlockDeviceError("Can't find a valid block device for"
+                                  " %s/%s/%s" %
+                                  (dev_type, unique_id, children))
+  return device
+
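+# Usage sketch (hypothetical unique_id): attaching to, or bringing up,
+# an existing logical volume and making it ready for I/O:
+#
+#   dev = AttachOrAssemble("lvm", ("xenvg", "inst1.disk0"), None)
+#   dev.Open()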
+
+def Create(dev_type, unique_id, children, size):
+  """Create a device.
+
+  """
+  if dev_type not in DEV_MAP:
+    raise errors.ProgrammerError("Invalid block device type '%s'" % dev_type)
+  device = DEV_MAP[dev_type].Create(unique_id, children, size)
+  return device
diff --git a/lib/cli.py b/lib/cli.py
new file mode 100644
index 0000000000000000000000000000000000000000..56ec477c6fb0d00e2c944b59b731dfac6e870170
--- /dev/null
+++ b/lib/cli.py
@@ -0,0 +1,272 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Module dealing with command line parsing"""
+
+
+import sys
+import textwrap
+import os.path
+import copy
+
+from ganeti import utils
+from ganeti import logger
+from ganeti import errors
+from ganeti import mcpu
+from ganeti import constants
+
+from optparse import (OptionParser, make_option, TitledHelpFormatter,
+                      Option, OptionValueError, SUPPRESS_HELP)
+
+__all__ = ["DEBUG_OPT", "NOHDR_OPT", "SEP_OPT", "GenericMain", "SubmitOpCode",
+           "cli_option",
+           "ARGS_NONE", "ARGS_FIXED", "ARGS_ATLEAST", "ARGS_ANY", "ARGS_ONE",
+           "USEUNITS_OPT"]
+
+DEBUG_OPT = make_option("-d", "--debug", default=False,
+                        action="store_true",
+                        help="Turn debugging on")
+
+NOHDR_OPT = make_option("--no-headers", default=False,
+                        action="store_true", dest="no_headers",
+                        help="Don't display column headers")
+
+SEP_OPT = make_option("--separator", default=" ",
+                      action="store", dest="separator",
+                      help="Separator between output fields"
+                      " (defaults to one space)")
+
+USEUNITS_OPT = make_option("--human-readable", default=False,
+                           action="store_true", dest="human_readable",
+                           help="Print sizes in human readable format")
+
+_LOCK_OPT = make_option("--lock-retries", default=None,
+                        type="int", help=SUPPRESS_HELP)
+
+
+def ARGS_FIXED(val):
+  """Macro-like function denoting a fixed number of arguments"""
+  return -val
+
+
+def ARGS_ATLEAST(val):
+  """Macro-like function denoting a minimum number of arguments"""
+  return val
+
+
+ARGS_NONE = None
+ARGS_ONE = ARGS_FIXED(1)
+ARGS_ANY = ARGS_ATLEAST(0)
+
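+# The encoding is thus: None means "no arguments", a negative value -N
+# means "exactly N arguments" and a non-negative value N means "at
+# least N arguments" (see the checks in _ParseArgs below). For example
+# (illustrative):
+#
+#   ARGS_FIXED(2) == -2    # exactly two arguments
+#   ARGS_ATLEAST(1) == 1   # one or more arguments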
+
+def check_unit(option, opt, value):
+  try:
+    return utils.ParseUnit(value)
+  except errors.UnitParseError, err:
+    raise OptionValueError("option %s: %s" % (opt, err))
+
+
+class CliOption(Option):
+  TYPES = Option.TYPES + ("unit",)
+  TYPE_CHECKER = copy.copy(Option.TYPE_CHECKER)
+  TYPE_CHECKER["unit"] = check_unit
+
+
+# optparse.py sets make_option, so we do it for our own option class, too
+cli_option = CliOption
+
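+# A hypothetical option using the custom "unit" type could look like:
+#
+#   SIZE_OPT = cli_option("-s", "--size", dest="size", type="unit",
+#                         help="Size of the device")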
+
+def _ParseArgs(argv, commands):
+  """Parses the command line and return the function which must be
+  executed together with its arguments
+
+  Arguments:
+    argv: the command line
+
+    commands: dictionary with special contents, see the design doc for
+    cmdline handling
+  """
+  if len(argv) == 0:
+    binary = "<command>"
+  else:
+    binary = argv[0].split("/")[-1]
+
+  if len(argv) > 1 and argv[1] == "--version":
+    print "%s (ganeti) %s" % (binary, constants.RELEASE_VERSION)
+    # Quit right away. That way we don't have to care about this special
+    # argument. optparse.py does it the same.
+    sys.exit(0)
+
+  if len(argv) < 2 or argv[1] not in commands.keys():
+    # let's do a nice thing
+    sortedcmds = commands.keys()
+    sortedcmds.sort()
+    print ("Usage: %(bin)s {command} [options...] [argument...]"
+           "\n%(bin)s <command> --help to see details, or"
+           " man %(bin)s\n" % {"bin": binary})
+    # compute the max line length for cmd + usage
+    mlen = max([len(" %s %s" % (cmd, commands[cmd][3])) for cmd in commands])
+    mlen = min(60, mlen) # cap the width; should not be needed in practice
+    # and format a nice command list
+    print "Commands:"
+    for cmd in sortedcmds:
+      cmdstr = " %s %s" % (cmd, commands[cmd][3])
+      help_text = commands[cmd][4]
+      help_lines = textwrap.wrap(help_text, 79-3-mlen)
+      print "%-*s - %s" % (mlen, cmdstr,
+                                          help_lines.pop(0))
+      for line in help_lines:
+        print "%-*s   %s" % (mlen, "", line)
+    print
+    return None, None, None
+  cmd = argv.pop(1)
+  func, nargs, parser_opts, usage, description = commands[cmd]
+  parser_opts.append(_LOCK_OPT)
+  parser = OptionParser(option_list=parser_opts,
+                        description=description,
+                        formatter=TitledHelpFormatter(),
+                        usage="%%prog %s %s" % (cmd, usage))
+  parser.disable_interspersed_args()
+  # parse_args defaults to sys.argv[1:], from which the command name
+  # was already popped above (argv aliases sys.argv in GenericMain)
+  options, args = parser.parse_args()
+  if nargs is None:
+    if len(args) != 0:
+      print >> sys.stderr, ("Error: Command %s expects no arguments" % cmd)
+      return None, None, None
+  elif nargs < 0 and len(args) != -nargs:
+    print >> sys.stderr, ("Error: Command %s expects %d argument(s)" %
+                         (cmd, -nargs))
+    return None, None, None
+  elif nargs >= 0 and len(args) < nargs:
+    print >> sys.stderr, ("Error: Command %s expects at least %d argument(s)" %
+                         (cmd, nargs))
+    return None, None, None
+
+  return func, options, args
+
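+# Each value in the commands dictionary is thus a five-element tuple,
+# unpacked above as (func, nargs, parser_opts, usage, description). A
+# hypothetical entry could look like:
+#
+#   commands = {
+#     "info": (ShowInfo, ARGS_NONE, [DEBUG_OPT], "", "Show information"),
+#     }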
+
+def _AskUser(text):
+  """Ask the user a yes/no question.
+
+  Args:
+    text - the question to ask
+
+  Returns:
+    True or False depending on the answer (defaults to False, i.e. 'no').
+
+  """
+  try:
+    f = file("/dev/tty", "r+")
+  except IOError:
+    return False
+  answer = False
+  try:
+    f.write(textwrap.fill(text))
+    f.write('\n')
+    f.write("y/[n]: ")
+    line = f.readline(16).strip().lower()
+    answer = line in ('y', 'yes')
+  finally:
+    f.close()
+  return answer
+
+
+def SubmitOpCode(op):
+  """Function to submit an opcode.
+
+  This is just a simple wrapper over the construction of the processor
+  instance. It should be extended to better handle feedback and
+  interaction functions.
+
+  """
+  proc = mcpu.Processor()
+  return proc.ExecOpCode(op, logger.ToStdout)
+
+
+def GenericMain(commands):
+  """Generic main function for all the gnt-* commands.
+
+  Argument: a dictionary with a special structure, see the design doc
+  for command line handling.
+
+  """
+  # save the program name and the entire command line for later logging
+  if sys.argv:
+    binary = os.path.basename(sys.argv[0]) or sys.argv[0]
+    if len(sys.argv) >= 2:
+      binary += " " + sys.argv[1]
+      old_cmdline = " ".join(sys.argv[2:])
+    else:
+      old_cmdline = ""
+  else:
+    binary = "<unknown program>"
+    old_cmdline = ""
+
+  func, options, args = _ParseArgs(sys.argv, commands)
+  if func is None: # parse error
+    return 1
+
+  options._ask_user = _AskUser
+
+  logger.SetupLogging(debug=options.debug, program=binary)
+
+  try:
+    utils.Lock('cmd', max_retries=options.lock_retries, debug=options.debug)
+  except errors.LockError, err:
+    logger.ToStderr(str(err))
+    return 1
+
+  if old_cmdline:
+    logger.Info("run with arguments '%s'" % old_cmdline)
+  else:
+    logger.Info("run with no arguments")
+
+  try:
+    try:
+      result = func(options, args)
+    except errors.ConfigurationError, err:
+      logger.Error("Corrupt configuration file: %s" % err)
+      logger.ToStderr("Aborting.")
+      result = 2
+    except errors.HooksAbort, err:
+      logger.ToStderr("Failure: hooks execution failed:")
+      for node, script, out in err.args[0]:
+        if out:
+          logger.ToStderr("  node: %s, script: %s, output: %s" %
+                          (node, script, out))
+        else:
+          logger.ToStderr("  node: %s, script: %s (no output)" %
+                          (node, script))
+      result = 1
+    except errors.HooksFailure, err:
+      logger.ToStderr("Failure: hooks general failure: %s" % str(err))
+      result = 1
+    except errors.OpPrereqError, err:
+      logger.ToStderr("Failure: prerequisites not met for this"
+                      " operation:\n%s" % str(err))
+      result = 1
+    except errors.OpExecError, err:
+      logger.ToStderr("Failure: command execution error:\n%s" % str(err))
+      result = 1
+  finally:
+    utils.Unlock('cmd')
+    utils.LockCleanup()
+
+  return result
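+
+
+# A gnt-* script would then typically end with something like this
+# sketch (commands being its own command dictionary):
+#
+#   if __name__ == '__main__':
+#     sys.exit(GenericMain(commands))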
diff --git a/lib/cmdlib.py b/lib/cmdlib.py
new file mode 100644
index 0000000000000000000000000000000000000000..f6d2b17ac6b155bc5302ba00965c3886f3d3be47
--- /dev/null
+++ b/lib/cmdlib.py
@@ -0,0 +1,3347 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Module implementing the commands used by gnt-* programs."""
+
+# pylint: disable-msg=W0613,W0201
+
+import os
+import os.path
+import sha
+import socket
+import time
+import tempfile
+import re
+import platform
+
+from ganeti import rpc
+from ganeti import ssh
+from ganeti import logger
+from ganeti import utils
+from ganeti import errors
+from ganeti import hypervisor
+from ganeti import config
+from ganeti import constants
+from ganeti import objects
+from ganeti import opcodes
+from ganeti import ssconf
+
+
+class LogicalUnit(object):
+  """Logical Unit base class..
+
+  Subclasses must follow these rules:
+    - implement CheckPrereq which also fills in the opcode instance
+      with all the fields (even if as None)
+    - implement Exec
+    - implement BuildHooksEnv
+    - redefine HPATH and HTYPE
+    - optionally redefine their run requirements (REQ_CLUSTER,
+      REQ_MASTER); note that all commands require root permissions
+
+  """
+  HPATH = None
+  HTYPE = None
+  _OP_REQP = []
+  REQ_CLUSTER = True
+  REQ_MASTER = True
+
+  def __init__(self, processor, op, cfg, sstore):
+    """Constructor for LogicalUnit.
+
+    This needs to be overriden in derived classes in order to check op
+    validity.
+
+    """
+    self.processor = processor
+    self.op = op
+    self.cfg = cfg
+    self.sstore = sstore
+    for attr_name in self._OP_REQP:
+      attr_val = getattr(op, attr_name, None)
+      if attr_val is None:
+        raise errors.OpPrereqError("Required parameter '%s' missing" %
+                                   attr_name)
+    if self.REQ_CLUSTER:
+      if not cfg.IsCluster():
+        raise errors.OpPrereqError("Cluster not initialized yet,"
+                                   " use 'gnt-cluster init' first.")
+      if self.REQ_MASTER:
+        master = cfg.GetMaster()
+        if master != socket.gethostname():
+          raise errors.OpPrereqError("Commands must be run on the master"
+                                     " node %s" % master)
+
+  def CheckPrereq(self):
+    """Check prerequisites for this LU.
+
+    This method should check that the prerequisites for the execution
+    of this LU are fulfilled. It can do internode communication, but
+    it should be idempotent - no cluster or system changes are
+    allowed.
+
+    The method should raise errors.OpPrereqError in case something is
+    not fulfilled. Its return value is ignored.
+
+    This method should also update all the parameters of the opcode to
+    their canonical form; e.g. a short node name must be fully
+    expanded after this method has successfully completed (so that
+    hooks, logging, etc. work correctly).
+
+    """
+    raise NotImplementedError
+
+  def Exec(self, feedback_fn):
+    """Execute the LU.
+
+    This method should implement the actual work. It should raise
+    errors.OpExecError for failures that are somewhat dealt with in
+    code, or expected.
+
+    """
+    raise NotImplementedError
+
+  def BuildHooksEnv(self):
+    """Build hooks environment for this LU.
+
+    This method should return a three-element tuple consisting of: a dict
+    containing the environment that will be used for running the
+    specific hook for this LU, a list of node names on which the hook
+    should run before the execution, and a list of node names on which
+    the hook should run after the execution.
+
+    The keys of the dict must not have 'GANETI_' prefixed as this will
+    be handled in the hooks runner. Also note additional keys will be
+    added by the hooks runner. If the LU doesn't define any
+    environment, an empty dict (and not None) should be returned.
+
+    As for the node lists, the master node should not be included in
+    them, as it will be added by the hooks runner in case this LU
+    requires a cluster to run on (otherwise we don't have a node
+    list). "No nodes" should be returned as an empty list (and not
+    None).
+
+    Note that if the HPATH for a LU class is None, this function will
+    not be called.
+
+    """
+    raise NotImplementedError
+
+
+class NoHooksLU(LogicalUnit):
+  """Simple LU which runs no hooks.
+
+  This LU is intended as a parent for other LogicalUnits which will
+  run no hooks, in order to reduce duplicate code.
+
+  """
+  HPATH = None
+  HTYPE = None
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This is a no-op, since we don't run hooks.
+
+    """
+    return
+
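+# A concrete logical unit would thus look roughly like this sketch
+# (hypothetical names, with the real validation and work elided):
+#
+#   class LUExample(NoHooksLU):
+#     _OP_REQP = ["node_name"]
+#
+#     def CheckPrereq(self):
+#       pass  # validate and canonicalise self.op.node_name here
+#
+#     def Exec(self, feedback_fn):
+#       feedback_fn("running on %s" % self.op.node_name)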
+
+def _UpdateEtcHosts(fullnode, ip):
+  """Ensure a node has a correct entry in /etc/hosts.
+
+  Args:
+    fullnode - Fully qualified domain name of host. (str)
+    ip       - IPv4 address of host (str)
+
+  """
+  node = fullnode.split(".", 1)[0]
+
+  f = open('/etc/hosts', 'r+')
+
+  inthere = False
+
+  save_lines = []
+  add_lines = []
+  removed = False
+
+  while True:
+    rawline = f.readline()
+
+    if not rawline:
+      # End of file
+      break
+
+    line = rawline.split('\n')[0]
+
+    # Strip off comments
+    line = line.split('#')[0]
+
+    if not line:
+      # Entire line was comment, skip
+      save_lines.append(rawline)
+      continue
+
+    fields = line.split()
+
+    haveall = True
+    havesome = False
+    for spec in [ ip, fullnode, node ]:
+      if spec not in fields:
+        haveall = False
+      if spec in fields:
+        havesome = True
+
+    if haveall:
+      inthere = True
+      save_lines.append(rawline)
+      continue
+
+    if havesome and not haveall:
+      # Stale or manually-added line that matches only partially; drop it.
+      removed = True
+      continue
+
+    save_lines.append(rawline)
+
+  if not inthere:
+    add_lines.append('%s\t%s %s\n' % (ip, fullnode, node))
+
+  if removed:
+    if add_lines:
+      save_lines = save_lines + add_lines
+
+    # We removed a line, write a new file and replace old.
+    fd, tmpname = tempfile.mkstemp('tmp', 'hosts_', '/etc')
+    newfile = os.fdopen(fd, 'w')
+    newfile.write(''.join(save_lines))
+    newfile.close()
+    # mkstemp creates the file with mode 0600; /etc/hosts must remain
+    # world-readable
+    os.chmod(tmpname, 0644)
+    os.rename(tmpname, '/etc/hosts')
+
+  elif add_lines:
+    # Simply appending a new line will do the trick.
+    f.seek(0, 2)
+    for add in add_lines:
+      f.write(add)
+
+  f.close()
+
+
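+# Example (illustrative name and address): after
+# _UpdateEtcHosts("node1.example.com", "192.0.2.10"), /etc/hosts contains
+# exactly one matching line of the form
+#
+#   192.0.2.10\tnode1.example.com node1
+#
+# and any line that matched only part of the (ip, fqdn, shortname) triple
+# has been removed as stale.
+
+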
+def _UpdateKnownHosts(fullnode, ip, pubkey):
+  """Ensure a node has a correct known_hosts entry.
+
+  Args:
+    fullnode - Fully qualified domain name of host. (str)
+    ip       - IPv4 address of host (str)
+    pubkey   - the public key of the cluster
+
+  """
+  if os.path.exists('/etc/ssh/ssh_known_hosts'):
+    f = open('/etc/ssh/ssh_known_hosts', 'r+')
+  else:
+    f = open('/etc/ssh/ssh_known_hosts', 'w+')
+
+  inthere = False
+
+  save_lines = []
+  add_lines = []
+  removed = False
+
+  while True:
+    rawline = f.readline()
+    logger.Debug('read %s' % (repr(rawline),))
+
+    if not rawline:
+      # End of file
+      break
+
+    line = rawline.split('\n')[0]
+
+    # known_hosts entries are "host1,host2 keytype key"; skip blank or
+    # malformed lines that don't have at least these three fields
+    parts = line.split(' ')
+    if len(parts) < 3:
+      save_lines.append(rawline)
+      continue
+    fields = parts[0].split(',')
+    key = parts[2]
+
+    haveall = True
+    havesome = False
+    for spec in [ ip, fullnode ]:
+      if spec not in fields:
+        haveall = False
+      if spec in fields:
+        havesome = True
+
+    logger.Debug("key, pubkey = %s." % (repr((key, pubkey)),))
+    if haveall and key == pubkey:
+      inthere = True
+      save_lines.append(rawline)
+      logger.Debug("Keeping known_hosts '%s'." % (repr(rawline),))
+      continue
+
+    if havesome and (not haveall or key != pubkey):
+      removed = True
+      logger.Debug("Discarding known_hosts '%s'." % (repr(rawline),))
+      continue
+
+    save_lines.append(rawline)
+
+  if not inthere:
+    add_lines.append('%s,%s ssh-rsa %s\n' % (fullnode, ip, pubkey))
+    logger.Debug("Adding known_hosts '%s'." % (repr(add_lines[-1]),))
+
+  if removed:
+    save_lines = save_lines + add_lines
+
+    # Write a new file and replace old.
+    fd, tmpname = tempfile.mkstemp('tmp', 'ssh_known_hosts_', '/etc/ssh')
+    newfile = os.fdopen(fd, 'w')
+    newfile.write(''.join(save_lines))
+    newfile.close()
+    logger.Debug("Wrote new known_hosts.")
+    # mkstemp creates the file with mode 0600; the system-wide known_hosts
+    # must remain world-readable
+    os.chmod(tmpname, 0644)
+    os.rename(tmpname, '/etc/ssh/ssh_known_hosts')
+
+  elif add_lines:
+    # Simply appending a new line will do the trick.
+    f.seek(0, 2)
+    for add in add_lines:
+      f.write(add)
+
+  f.close()
+
+
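+# Example (illustrative): entries are written in the single-line OpenSSH
+# known_hosts format used above, hostname and IP sharing the cluster key:
+#
+#   node1.example.com,192.0.2.10 ssh-rsa AAAAB3NzaC1yc2E...
+#
+# A line matching the host but carrying a different key is discarded, so a
+# re-added node cannot leave a stale key behind.
+
+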
+def _HasValidVG(vglist, vgname):
+  """Checks if the volume group list is valid.
+
+  A non-None return value means there's an error, and the return value
+  is the error message.
+
+  """
+  vgsize = vglist.get(vgname, None)
+  if vgsize is None:
+    return "volume group '%s' missing" % vgname
+  elif vgsize < 20480:
+    return ("volume group '%s' too small (20480MiB required, %dMiB found)" %
+            (vgname, vgsize))
+  return None
+
+
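+# Usage sketch (illustrative sizes; 'xenvg' is just an example name):
+#
+#   vglist = utils.ListVolumeGroups()        # e.g. {'xenvg': 40960} (MiB)
+#   _HasValidVG(vglist, 'xenvg')             # -> None, the group is valid
+#   _HasValidVG({'xenvg': 10240}, 'xenvg')   # -> "... too small ..."
+#   _HasValidVG({}, 'xenvg')                 # -> "volume group 'xenvg' missing"
+
+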
+def _InitSSHSetup(node):
+  """Setup the SSH configuration for the cluster.
+
+
+  This generates a dsa keypair for root, adds the pub key to the
+  permitted hosts and adds the hostkey to its own known hosts.
+
+  Args:
+    node: the name of this host as a fqdn
+
+  """
+  utils.RemoveFile('/root/.ssh/known_hosts')
+
+  if os.path.exists('/root/.ssh/id_dsa'):
+    utils.CreateBackup('/root/.ssh/id_dsa')
+  if os.path.exists('/root/.ssh/id_dsa.pub'):
+    utils.CreateBackup('/root/.ssh/id_dsa.pub')
+
+  utils.RemoveFile('/root/.ssh/id_dsa')
+  utils.RemoveFile('/root/.ssh/id_dsa.pub')
+
+  result = utils.RunCmd(["ssh-keygen", "-t", "dsa",
+                         "-f", "/root/.ssh/id_dsa",
+                         "-q", "-N", ""])
+  if result.failed:
+    raise errors.OpExecError, ("could not generate ssh keypair, error %s" %
+                               result.output)
+
+  f = open('/root/.ssh/id_dsa.pub', 'r')
+  try:
+    utils.AddAuthorizedKey('/root/.ssh/authorized_keys', f.read(8192))
+  finally:
+    f.close()
+
+
+def _InitGanetiServerSetup(ss):
+  """Setup the necessary configuration for the initial node daemon.
+
+  This creates the nodepass file containing the shared password for
+  the cluster and also generates the SSL certificate.
+
+  """
+  # Create pseudo random password
+  randpass = sha.new(os.urandom(64)).hexdigest()
+  # and write it into sstore
+  ss.SetKey(ss.SS_NODED_PASS, randpass)
+
+  result = utils.RunCmd(["openssl", "req", "-new", "-newkey", "rsa:1024",
+                         "-days", str(365*5), "-nodes", "-x509",
+                         "-keyout", constants.SSL_CERT_FILE,
+                         "-out", constants.SSL_CERT_FILE, "-batch"])
+  if result.failed:
+    raise errors.OpExecError, ("could not generate server ssl cert, command"
+                               " %s had exitcode %s and error message %s" %
+                               (result.cmd, result.exit_code, result.output))
+
+  os.chmod(constants.SSL_CERT_FILE, 0400)
+
+  result = utils.RunCmd([constants.NODE_INITD_SCRIPT, "restart"])
+
+  if result.failed:
+    raise errors.OpExecError, ("could not start the node daemon, command %s"
+                               " had exitcode %s and error %s" %
+                               (result.cmd, result.exit_code, result.output))
+
+
+def _InitClusterInterface(fullname, name, ip):
+  """Initialize the master startup script.
+
+  """
+  f = file(constants.CLUSTER_NAME_FILE, 'w')
+  f.write("%s\n" % fullname)
+  f.close()
+
+  f = file(constants.MASTER_INITD_SCRIPT, 'w')
+  f.write ("#!/bin/sh\n")
+  f.write ("\n")
+  f.write ("# Start Ganeti Master Virtual Address\n")
+  f.write ("\n")
+  f.write ("DESC=\"Ganeti Master IP\"\n")
+  f.write ("MASTERNAME=\"%s\"\n" % name)
+  f.write ("MASTERIP=\"%s\"\n" % ip)
+  f.write ("case \"$1\" in\n")
+  f.write ("  start)\n")
+  f.write ("    if fping -q -c 3 ${MASTERIP} &>/dev/null; then\n")
+  f.write ("        echo \"$MASTERNAME no-go - there is already a master.\"\n")
+  f.write ("        rm -f %s\n" % constants.MASTER_CRON_LINK)
+  f.write ("        scp ${MASTERNAME}:%s %s\n" %
+           (constants.CLUSTER_CONF_FILE, constants.CLUSTER_CONF_FILE))
+  f.write ("    else\n")
+  f.write ("        echo -n \"Starting $DESC: \"\n")
+  f.write ("        ip address add ${MASTERIP}/32 dev xen-br0"
+           " label xen-br0:0\n")
+  f.write ("        arping -q -U -c 3 -I xen-br0 -s ${MASTERIP} ${MASTERIP}\n")
+  f.write ("        echo \"$MASTERNAME.\"\n")
+  f.write ("    fi\n")
+  f.write ("    ;;\n")
+  f.write ("  stop)\n")
+  f.write ("    echo -n \"Stopping $DESC: \"\n")
+  f.write ("    ip address del ${MASTERIP}/32 dev xen-br0\n")
+  f.write ("    echo \"$MASTERNAME.\"\n")
+  f.write ("    ;;\n")
+  f.write ("  *)\n")
+  f.write ("    echo \"Usage: $0 {start|stop}\" >&2\n")
+  f.write ("    exit 1\n")
+  f.write ("    ;;\n")
+  f.write ("esac\n")
+  f.write ("\n")
+  f.write ("exit 0\n")
+  f.flush()
+  os.fsync(f.fileno())
+  f.close()
+  os.chmod(constants.MASTER_INITD_SCRIPT, 0755)
+
+
+class LUInitCluster(LogicalUnit):
+  """Initialise the cluster.
+
+  """
+  HPATH = "cluster-init"
+  HTYPE = constants.HTYPE_CLUSTER
+  _OP_REQP = ["cluster_name", "hypervisor_type", "vg_name", "mac_prefix",
+              "def_bridge"]
+  REQ_CLUSTER = False
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    Notes: Since we don't require a cluster, we must manually add
+    ourselves in the post-run node list.
+
+    """
+    env = {"CLUSTER": self.op.cluster_name,
+           "MASTER": self.hostname['hostname_full']}
+    return env, [], [self.hostname['hostname_full']]
+
+  def CheckPrereq(self):
+    """Verify that the passed name is a valid one.
+
+    """
+    if config.ConfigWriter.IsCluster():
+      raise errors.OpPrereqError, ("Cluster is already initialised")
+
+    hostname_local = socket.gethostname()
+    self.hostname = hostname = utils.LookupHostname(hostname_local)
+    if not hostname:
+      raise errors.OpPrereqError, ("Cannot resolve my own hostname ('%s')" %
+                                   hostname_local)
+
+    self.clustername = clustername = utils.LookupHostname(self.op.cluster_name)
+    if not clustername:
+      raise errors.OpPrereqError, ("Cannot resolve given cluster name ('%s')"
+                                   % self.op.cluster_name)
+
+    result = utils.RunCmd(["fping", "-S127.0.0.1", "-q", hostname['ip']])
+    if result.failed:
+      raise errors.OpPrereqError, ("Inconsistency: this host's name resolves"
+                                   " to %s,\nbut this ip address does not"
+                                   " belong to this host."
+                                   " Aborting." % hostname['ip'])
+
+    secondary_ip = getattr(self.op, "secondary_ip", None)
+    if secondary_ip and not utils.IsValidIP(secondary_ip):
+      raise errors.OpPrereqError, ("Invalid secondary ip given")
+    if secondary_ip and secondary_ip != hostname['ip']:
+      result = utils.RunCmd(["fping", "-S127.0.0.1", "-q", secondary_ip])
+      if result.failed:
+        raise errors.OpPrereqError, ("You gave %s as secondary IP,\n"
+                                     "but it does not belong to this host." %
+                                     secondary_ip)
+    self.secondary_ip = secondary_ip
+
+    # checks presence of the volume group given
+    vgstatus = _HasValidVG(utils.ListVolumeGroups(), self.op.vg_name)
+
+    if vgstatus:
+      raise errors.OpPrereqError, ("Error: %s" % vgstatus)
+
+    if not re.match("^[0-9a-z]{2}:[0-9a-z]{2}:[0-9a-z]{2}$",
+                    self.op.mac_prefix):
+      raise errors.OpPrereqError, ("Invalid mac prefix given '%s'" %
+                                   self.op.mac_prefix)
+
+    if self.op.hypervisor_type not in hypervisor.VALID_HTYPES:
+      raise errors.OpPrereqError, ("Invalid hypervisor type given '%s'" %
+                                   self.op.hypervisor_type)
+
+  def Exec(self, feedback_fn):
+    """Initialize the cluster.
+
+    """
+    clustername = self.clustername
+    hostname = self.hostname
+
+    # adds the cluster name file and the master startup script
+    _InitClusterInterface(clustername['hostname_full'],
+                          clustername['hostname'],
+                          clustername['ip'])
+
+    # set up the simple store
+    ss = ssconf.SimpleStore()
+    ss.SetKey(ss.SS_HYPERVISOR, self.op.hypervisor_type)
+
+    # set up the inter-node password and certificate
+    _InitGanetiServerSetup(ss)
+
+    # start the master ip
+    rpc.call_node_start_master(hostname['hostname_full'])
+
+    # set up ssh config and /etc/hosts
+    f = open('/etc/ssh/ssh_host_rsa_key.pub', 'r')
+    try:
+      sshline = f.read()
+    finally:
+      f.close()
+    sshkey = sshline.split(" ")[1]
+
+    _UpdateEtcHosts(hostname['hostname_full'],
+                    hostname['ip'],
+                    )
+
+    _UpdateKnownHosts(hostname['hostname_full'],
+                      hostname['ip'],
+                      sshkey,
+                      )
+
+    _InitSSHSetup(hostname['hostname'])
+
+    # init of cluster config file
+    cfgw = config.ConfigWriter()
+    cfgw.InitConfig(hostname['hostname'], hostname['ip'], self.secondary_ip,
+                    clustername['hostname'], sshkey, self.op.mac_prefix,
+                    self.op.vg_name, self.op.def_bridge)
+
+
+class LUDestroyCluster(NoHooksLU):
+  """Logical unit for destroying the cluster.
+
+  """
+  _OP_REQP = []
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks whether the cluster is empty.
+
+    Any errors are signalled by raising errors.OpPrereqError.
+
+    """
+    master = self.cfg.GetMaster()
+
+    nodelist = self.cfg.GetNodeList()
+    if len(nodelist) > 0 and nodelist != [master]:
+      raise errors.OpPrereqError, ("There are still %d node(s) in"
+                                   " this cluster." % (len(nodelist) - 1))
+
+  def Exec(self, feedback_fn):
+    """Destroys the cluster.
+
+    """
+    utils.CreateBackup('/root/.ssh/id_dsa')
+    utils.CreateBackup('/root/.ssh/id_dsa.pub')
+    rpc.call_node_leave_cluster(self.cfg.GetMaster())
+
+
+class LUVerifyCluster(NoHooksLU):
+  """Verifies the cluster status.
+
+  """
+  _OP_REQP = []
+
+  def _VerifyNode(self, node, file_list, local_cksum, vglist, node_result,
+                  remote_version, feedback_fn):
+    """Run multiple tests against a node.
+
+    Test list:
+      - compares ganeti version
+      - checks vg existence and size > 20G
+      - checks config file checksum
+      - checks ssh to other nodes
+
+    Args:
+      node: name of the node to check
+      file_list: required list of files
+      local_cksum: dictionary of local files and their checksums
+      vglist: dictionary of volume group names and their sizes
+      node_result: the results from the node
+      remote_version: the protocol version reported by the node
+      feedback_fn: function used to accumulate results
+
+    Returns:
+      True if the node had problems, False otherwise
+
+    """
+    # compares ganeti version
+    local_version = constants.PROTOCOL_VERSION
+    if not remote_version:
+      feedback_fn(" - ERROR: connection to %s failed" % (node))
+      return True
+
+    if local_version != remote_version:
+      feedback_fn("  - ERROR: sw version mismatch: master %s, node(%s) %s" %
+                      (local_version, node, remote_version))
+      return True
+
+    # checks vg existence and size > 20G
+
+    bad = False
+    if not vglist:
+      feedback_fn("  - ERROR: unable to check volume groups on node %s." %
+                      (node,))
+      bad = True
+    else:
+      vgstatus = _HasValidVG(vglist, self.cfg.GetVGName())
+      if vgstatus:
+        feedback_fn("  - ERROR: %s on node %s" % (vgstatus, node))
+        bad = True
+
+    # checks config file checksum
+    # checks ssh to any
+
+    if 'filelist' not in node_result:
+      bad = True
+      feedback_fn("  - ERROR: node hasn't returned file checksum data")
+    else:
+      remote_cksum = node_result['filelist']
+      for file_name in file_list:
+        if file_name not in remote_cksum:
+          bad = True
+          feedback_fn("  - ERROR: file '%s' missing" % file_name)
+        elif remote_cksum[file_name] != local_cksum[file_name]:
+          bad = True
+          feedback_fn("  - ERROR: file '%s' has wrong checksum" % file_name)
+
+    if 'nodelist' not in node_result:
+      bad = True
+      feedback_fn("  - ERROR: node hasn't returned node connectivity data")
+    else:
+      if node_result['nodelist']:
+        bad = True
+        for remote_node in node_result['nodelist']:
+          feedback_fn("  - ERROR: communication with node '%s': %s" %
+                          (remote_node, node_result['nodelist'][remote_node]))
+    hyp_result = node_result.get('hypervisor', None)
+    if hyp_result is not None:
+      feedback_fn("  - ERROR: hypervisor verify failure: '%s'" % hyp_result)
+    return bad
+
+  def _VerifyInstance(self, instance, node_vol_is, node_instance, feedback_fn):
+    """Verify an instance.
+
+    This function checks to see if the required block devices are
+    available on the instance's node.
+
+    """
+    bad = False
+
+    instancelist = self.cfg.GetInstanceList()
+    if instance not in instancelist:
+      feedback_fn("  - ERROR: instance %s not in instance list %s" %
+                      (instance, instancelist))
+      bad = True
+
+    instanceconfig = self.cfg.GetInstanceInfo(instance)
+    node_current = instanceconfig.primary_node
+
+    node_vol_should = {}
+    instanceconfig.MapLVsByNode(node_vol_should)
+
+    for node in node_vol_should:
+      for volume in node_vol_should[node]:
+        if node not in node_vol_is or volume not in node_vol_is[node]:
+          feedback_fn("  - ERROR: volume %s missing on node %s" %
+                          (volume, node))
+          bad = True
+
+    if instanceconfig.status != 'down':
+      if instance not in node_instance[node_current]:
+        feedback_fn("  - ERROR: instance %s not running on node %s" %
+                        (instance, node_current))
+        bad = True
+
+    for node in node_instance:
+      if node != node_current:
+        if instance in node_instance[node]:
+          feedback_fn("  - ERROR: instance %s should not run on node %s" %
+                          (instance, node))
+          bad = True
+
+    return bad
+
+  def _VerifyOrphanVolumes(self, node_vol_should, node_vol_is, feedback_fn):
+    """Verify if there are any unknown volumes in the cluster.
+
+    The .os, .swap and backup volumes are ignored. All other volumes are
+    reported as unknown.
+
+    """
+    bad = False
+
+    for node in node_vol_is:
+      for volume in node_vol_is[node]:
+        if node not in node_vol_should or volume not in node_vol_should[node]:
+          feedback_fn("  - ERROR: volume %s on node %s should not exist" %
+                      (volume, node))
+          bad = True
+    return bad
+
+  def _VerifyOrphanInstances(self, instancelist, node_instance, feedback_fn):
+    """Verify the list of running instances.
+
+    This checks what instances are running but unknown to the cluster.
+
+    """
+    bad = False
+    for node in node_instance:
+      for runninginstance in node_instance[node]:
+        if runninginstance not in instancelist:
+          feedback_fn("  - ERROR: instance %s on node %s should not exist" %
+                          (runninginstance, node))
+          bad = True
+    return bad
+
+  def _VerifyNodeConfigFiles(self, ismaster, node, file_list, feedback_fn):
+    """Verify the list of node config files"""
+
+    bad = False
+    for file_name in constants.MASTER_CONFIGFILES:
+      if ismaster and file_name not in file_list:
+        feedback_fn("  - ERROR: master config file %s missing from master"
+                    " node %s" % (file_name, node))
+        bad = True
+      elif not ismaster and file_name in file_list:
+        feedback_fn("  - ERROR: master config file %s should not exist"
+                    " on non-master node %s" % (file_name, node))
+        bad = True
+
+    for file_name in constants.NODE_CONFIGFILES:
+      if file_name not in file_list:
+        feedback_fn("  - ERROR: config file %s missing from node %s" %
+                    (file_name, node))
+        bad = True
+
+    return bad
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This has no prerequisites.
+
+    """
+    pass
+
+  def Exec(self, feedback_fn):
+    """Verify integrity of cluster, performing various test on nodes.
+
+    """
+    bad = False
+    feedback_fn("* Verifying global settings")
+    self.cfg.VerifyConfig()
+
+    master = self.cfg.GetMaster()
+    vg_name = self.cfg.GetVGName()
+    nodelist = utils.NiceSort(self.cfg.GetNodeList())
+    instancelist = utils.NiceSort(self.cfg.GetInstanceList())
+    node_volume = {}
+    node_instance = {}
+
+    # FIXME: verify OS list
+    # do local checksums
+    file_names = constants.CLUSTER_CONF_FILES
+    local_checksums = utils.FingerprintFiles(file_names)
+
+    feedback_fn("* Gathering data (%d nodes)" % len(nodelist))
+    all_configfile = rpc.call_configfile_list(nodelist)
+    all_volumeinfo = rpc.call_volume_list(nodelist, vg_name)
+    all_instanceinfo = rpc.call_instance_list(nodelist)
+    all_vglist = rpc.call_vg_list(nodelist)
+    node_verify_param = {
+      'filelist': file_names,
+      'nodelist': nodelist,
+      'hypervisor': None,
+      }
+    all_nvinfo = rpc.call_node_verify(nodelist, node_verify_param)
+    all_rversion = rpc.call_version(nodelist)
+
+    for node in nodelist:
+      feedback_fn("* Verifying node %s" % node)
+      result = self._VerifyNode(node, file_names, local_checksums,
+                                all_vglist[node], all_nvinfo[node],
+                                all_rversion[node], feedback_fn)
+      bad = bad or result
+      # node_configfile
+      nodeconfigfile = all_configfile[node]
+
+      if not nodeconfigfile:
+        feedback_fn("  - ERROR: connection to %s failed" % (node))
+        bad = True
+        continue
+
+      bad = bad or self._VerifyNodeConfigFiles(node==master, node,
+                                               nodeconfigfile, feedback_fn)
+
+      # node_volume
+      volumeinfo = all_volumeinfo[node]
+
+      if type(volumeinfo) != dict:
+        feedback_fn("  - ERROR: connection to %s failed" % (node,))
+        bad = True
+        continue
+
+      node_volume[node] = volumeinfo
+
+      # node_instance
+      nodeinstance = all_instanceinfo[node]
+      if type(nodeinstance) != list:
+        feedback_fn("  - ERROR: connection to %s failed" % (node,))
+        bad = True
+        continue
+
+      node_instance[node] = nodeinstance
+
+    node_vol_should = {}
+
+    for instance in instancelist:
+      feedback_fn("* Verifying instance %s" % instance)
+      result = self._VerifyInstance(instance, node_volume, node_instance,
+                                     feedback_fn)
+      bad = bad or result
+
+      inst_config = self.cfg.GetInstanceInfo(instance)
+
+      inst_config.MapLVsByNode(node_vol_should)
+
+    feedback_fn("* Verifying orphan volumes")
+    result = self._VerifyOrphanVolumes(node_vol_should, node_volume,
+                                       feedback_fn)
+    bad = bad or result
+
+    feedback_fn("* Verifying remaining instances")
+    result = self._VerifyOrphanInstances(instancelist, node_instance,
+                                         feedback_fn)
+    bad = bad or result
+
+    return int(bad)
+
+
+def _WaitForSync(cfgw, instance, oneshot=False, unlock=False):
+  """Sleep and poll for an instance's disk to sync.
+
+  """
+  if not instance.disks:
+    return True
+
+  if not oneshot:
+    logger.ToStdout("Waiting for instance %s to sync disks." % instance.name)
+
+  node = instance.primary_node
+
+  for dev in instance.disks:
+    cfgw.SetDiskID(dev, node)
+
+  retries = 0
+  while True:
+    max_time = 0
+    done = True
+    cumul_degraded = False
+    rstats = rpc.call_blockdev_getmirrorstatus(node, instance.disks)
+    if not rstats:
+      logger.ToStderr("Can't get any data from node %s" % node)
+      retries += 1
+      if retries >= 10:
+        raise errors.RemoteError, ("Can't contact node %s for mirror data,"
+                                   " aborting." % node)
+      time.sleep(6)
+      continue
+    retries = 0
+    for i in range(len(rstats)):
+      mstat = rstats[i]
+      if mstat is None:
+        logger.ToStderr("Can't compute data for node %s/%s" %
+                        (node, instance.disks[i].iv_name))
+        continue
+      perc_done, est_time, is_degraded = mstat
+      cumul_degraded = cumul_degraded or (is_degraded and perc_done is None)
+      if perc_done is not None:
+        done = False
+        if est_time is not None:
+          rem_time = "%d estimated seconds remaining" % est_time
+          max_time = max(max_time, est_time)
+        else:
+          rem_time = "no time estimate"
+        logger.ToStdout("- device %s: %5.2f%% done, %s" %
+                        (instance.disks[i].iv_name, perc_done, rem_time))
+    if done or oneshot:
+      break
+
+    if unlock:
+      utils.Unlock('cmd')
+    try:
+      # never sleep(0): that would busy-loop when no time estimate is known
+      time.sleep(min(60, max_time) or 5)
+    finally:
+      if unlock:
+        utils.Lock('cmd')
+
+  if done:
+    logger.ToStdout("Instance %s's disks are in sync." % instance.name)
+  return not cumul_degraded
+
+
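+# Illustrative note: each entry of the call_blockdev_getmirrorstatus result
+# consumed above is either None (status unavailable) or a
+# (perc_done, est_time, is_degraded) tuple, e.g.
+#
+#   (87.5, 120, True)     # resyncing: 87.5% done, ~120s left, degraded
+#   (None, None, False)   # fully synced and healthy
+
+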
+def _CheckDiskConsistency(cfgw, dev, node, on_primary):
+  """Check that mirrors are not degraded.
+
+  """
+
+  cfgw.SetDiskID(dev, node)
+
+  result = True
+  if on_primary or dev.AssembleOnSecondary():
+    rstats = rpc.call_blockdev_find(node, dev)
+    if not rstats:
+      logger.ToStderr("Can't get any data from node %s" % node)
+      result = False
+    else:
+      # rstats[5] is the device's degraded flag
+      result = result and (not rstats[5])
+  if dev.children:
+    for child in dev.children:
+      result = result and _CheckDiskConsistency(cfgw, child, node, on_primary)
+
+  return result
+
+
+class LUDiagnoseOS(NoHooksLU):
+  """Logical unit for OS diagnose/query.
+
+  """
+  _OP_REQP = []
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This always succeeds, since this is a pure query LU.
+
+    """
+    return
+
+  def Exec(self, feedback_fn):
+    """Compute the list of OSes.
+
+    """
+    node_list = self.cfg.GetNodeList()
+    node_data = rpc.call_os_diagnose(node_list)
+    if node_data is False:
+      raise errors.OpExecError, "Can't gather the list of OSes"
+    return node_data
+
+
+class LURemoveNode(LogicalUnit):
+  """Logical unit for removing a node.
+
+  """
+  HPATH = "node-remove"
+  HTYPE = constants.HTYPE_NODE
+  _OP_REQP = ["node_name"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This doesn't run on the target node in the pre phase, as a failed
+    node would not be able to run it.
+
+    """
+    all_nodes = self.cfg.GetNodeList()
+    all_nodes.remove(self.op.node_name)
+    return {"NODE_NAME": self.op.node_name}, all_nodes, all_nodes
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks:
+     - the node exists in the configuration
+     - it does not have primary or secondary instances
+     - it's not the master
+
+    Any errors are signalled by raising errors.OpPrereqError.
+
+    """
+    node = self.cfg.GetNodeInfo(self.cfg.ExpandNodeName(self.op.node_name))
+    if node is None:
+      raise errors.OpPrereqError, ("Node '%s' is unknown." %
+                                   self.op.node_name)
+
+    instance_list = self.cfg.GetInstanceList()
+
+    masternode = self.cfg.GetMaster()
+    if node.name == masternode:
+      raise errors.OpPrereqError, ("Node is the master node,"
+                                   " you need to failover first.")
+
+    for instance_name in instance_list:
+      instance = self.cfg.GetInstanceInfo(instance_name)
+      if node.name == instance.primary_node:
+        raise errors.OpPrereqError, ("Instance %s still running on the node,"
+                                     " please remove first." % instance_name)
+      if node.name in instance.secondary_nodes:
+        raise errors.OpPrereqError, ("Instance %s has node as a secondary,"
+                                     " please remove first." % instance_name)
+    self.op.node_name = node.name
+    self.node = node
+
+  def Exec(self, feedback_fn):
+    """Removes the node from the cluster.
+
+    """
+    node = self.node
+    logger.Info("stopping the node daemon and removing configs from node %s" %
+                node.name)
+
+    rpc.call_node_leave_cluster(node.name)
+
+    ssh.SSHCall(node.name, 'root', "%s stop" % constants.NODE_INITD_SCRIPT)
+
+    logger.Info("Removing node %s from config" % node.name)
+
+    self.cfg.RemoveNode(node.name)
+
+
+class LUQueryNodes(NoHooksLU):
+  """Logical unit for querying nodes.
+
+  """
+  _OP_REQP = ["output_fields"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the fields required are valid output fields.
+
+    """
+    self.static_fields = frozenset(["name", "pinst", "sinst", "pip", "sip"])
+    self.dynamic_fields = frozenset(["dtotal", "dfree",
+                                     "mtotal", "mnode", "mfree"])
+    self.all_fields = self.static_fields | self.dynamic_fields
+
+    if not self.all_fields.issuperset(self.op.output_fields):
+      raise errors.OpPrereqError, ("Unknown output fields selected: %s"
+                                   % ",".join(frozenset(self.op.output_fields).
+                                              difference(self.all_fields)))
+
+  def Exec(self, feedback_fn):
+    """Computes the list of nodes and their attributes.
+
+    """
+    nodenames = utils.NiceSort(self.cfg.GetNodeList())
+    nodelist = [self.cfg.GetNodeInfo(name) for name in nodenames]
+
+    # begin data gathering
+
+    if self.dynamic_fields.intersection(self.op.output_fields):
+      live_data = {}
+      node_data = rpc.call_node_info(nodenames, self.cfg.GetVGName())
+      for name in nodenames:
+        nodeinfo = node_data.get(name, None)
+        if nodeinfo:
+          live_data[name] = {
+            "mtotal": utils.TryConvert(int, nodeinfo['memory_total']),
+            "mnode": utils.TryConvert(int, nodeinfo['memory_dom0']),
+            "mfree": utils.TryConvert(int, nodeinfo['memory_free']),
+            "dtotal": utils.TryConvert(int, nodeinfo['vg_size']),
+            "dfree": utils.TryConvert(int, nodeinfo['vg_free']),
+            }
+        else:
+          live_data[name] = {}
+    else:
+      # dict.fromkeys(nodenames, {}) would share one dict between all keys
+      live_data = dict([(name, {}) for name in nodenames])
+
+    node_to_primary = dict.fromkeys(nodenames, 0)
+    node_to_secondary = dict.fromkeys(nodenames, 0)
+
+    if "pinst" in self.op.output_fields or "sinst" in self.op.output_fields:
+      instancelist = self.cfg.GetInstanceList()
+
+      for instance in instancelist:
+        instanceinfo = self.cfg.GetInstanceInfo(instance)
+        node_to_primary[instanceinfo.primary_node] += 1
+        for secnode in instanceinfo.secondary_nodes:
+          node_to_secondary[secnode] += 1
+
+    # end data gathering
+
+    output = []
+    for node in nodelist:
+      node_output = []
+      for field in self.op.output_fields:
+        if field == "name":
+          val = node.name
+        elif field == "pinst":
+          val = node_to_primary[node.name]
+        elif field == "sinst":
+          val = node_to_secondary[node.name]
+        elif field == "pip":
+          val = node.primary_ip
+        elif field == "sip":
+          val = node.secondary_ip
+        elif field in self.dynamic_fields:
+          val = live_data[node.name].get(field, "?")
+        else:
+          raise errors.ParameterError, field
+        val = str(val)
+        node_output.append(val)
+      output.append(node_output)
+
+    return output
+
+
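+# Example (illustrative values): for output_fields ["name", "pinst", "mfree"]
+# LUQueryNodes.Exec returns one row of strings per node, e.g.
+#
+#   [["node1.example.com", "2", "3072"],
+#    ["node2.example.com", "0", "4096"]]
+
+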
+def _CheckNodesDirs(node_list, paths):
+  """Verify if the given nodes have the same files.
+
+  Args:
+    node_list: the list of node names to check
+    paths: the list of directories to checksum and compare
+
+  Returns:
+    list of (node, different_file, message); if empty, the files are in sync
+
+  """
+  file_names = []
+  for dir_name in paths:
+    flist = [os.path.join(dir_name, name) for name in os.listdir(dir_name)]
+    flist = [name for name in flist if os.path.isfile(name)]
+    file_names.extend(flist)
+
+  local_checksums = utils.FingerprintFiles(file_names)
+
+  results = []
+  verify_params = {'filelist': file_names}
+  all_node_results = rpc.call_node_verify(node_list, verify_params)
+  for node_name in node_list:
+    node_result = all_node_results.get(node_name, False)
+    if not node_result or 'filelist' not in node_result:
+      results.append((node_name, "'all files'", "node communication error"))
+      continue
+    remote_checksums = node_result['filelist']
+    for fname in local_checksums:
+      if fname not in remote_checksums:
+        results.append((node_name, fname, "missing file"))
+      elif remote_checksums[fname] != local_checksums[fname]:
+        results.append((node_name, fname, "wrong checksum"))
+  return results
+
+
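+# Usage sketch (the directory path is illustrative):
+#
+#   problems = _CheckNodesDirs(cfg.GetNodeList(), ["/etc/ganeti/hooks"])
+#   for node_name, fname, msg in problems:
+#     logger.Error("%s: %s %s" % (node_name, fname, msg))
+
+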
+class LUAddNode(LogicalUnit):
+  """Logical unit for adding node to the cluster.
+
+  """
+  HPATH = "node-add"
+  HTYPE = constants.HTYPE_NODE
+  _OP_REQP = ["node_name"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This will run on all nodes before, and on all nodes + the new node after.
+
+    """
+    env = {
+      "NODE_NAME": self.op.node_name,
+      "NODE_PIP": self.op.primary_ip,
+      "NODE_SIP": self.op.secondary_ip,
+      }
+    nodes_0 = self.cfg.GetNodeList()
+    nodes_1 = nodes_0 + [self.op.node_name, ]
+    return env, nodes_0, nodes_1
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks:
+     - the new node is not already in the config
+     - it is resolvable
+     - its parameters (single/dual homed) matches the cluster
+
+    Any errors are signalled by raising errors.OpPrereqError.
+
+    """
+    node_name = self.op.node_name
+    cfg = self.cfg
+
+    dns_data = utils.LookupHostname(node_name)
+    if not dns_data:
+      raise errors.OpPrereqError, ("Node %s is not resolvable" % node_name)
+
+    node = dns_data['hostname']
+    primary_ip = self.op.primary_ip = dns_data['ip']
+    secondary_ip = getattr(self.op, "secondary_ip", None)
+    if secondary_ip is None:
+      secondary_ip = primary_ip
+    if not utils.IsValidIP(secondary_ip):
+      raise errors.OpPrereqError, ("Invalid secondary IP given")
+    self.op.secondary_ip = secondary_ip
+    node_list = cfg.GetNodeList()
+    if node in node_list:
+      raise errors.OpPrereqError, ("Node %s is already in the configuration"
+                                   % node)
+
+    for existing_node_name in node_list:
+      existing_node = cfg.GetNodeInfo(existing_node_name)
+      if (existing_node.primary_ip == primary_ip or
+          existing_node.secondary_ip == primary_ip or
+          existing_node.primary_ip == secondary_ip or
+          existing_node.secondary_ip == secondary_ip):
+        raise errors.OpPrereqError, ("New node ip address(es) conflict with"
+                                     " existing node %s" % existing_node.name)
+
+    # check that the type of the node (single versus dual homed) is the
+    # same as for the master
+    myself = cfg.GetNodeInfo(cfg.GetMaster())
+    master_singlehomed = myself.secondary_ip == myself.primary_ip
+    newbie_singlehomed = secondary_ip == primary_ip
+    if master_singlehomed != newbie_singlehomed:
+      if master_singlehomed:
+        raise errors.OpPrereqError, ("The master has no private ip but the"
+                                     " new node has one")
+      else:
+        raise errors.OpPrereqError ("The master has a private ip but the"
+                                    " new node doesn't have one")
+
+    # checks reachability
+    command = ["fping", "-q", primary_ip]
+    result = utils.RunCmd(command)
+    if result.failed:
+      raise errors.OpPrereqError, ("Node not reachable by ping")
+
+    if not newbie_singlehomed:
+      # check reachability from my secondary ip to newbie's secondary ip
+      command = ["fping", "-S%s" % myself.secondary_ip, "-q", secondary_ip]
+      result = utils.RunCmd(command)
+      if result.failed:
+        raise errors.OpPrereqError, ("Node secondary ip not reachable by ping")
+
+    self.new_node = objects.Node(name=node,
+                                 primary_ip=primary_ip,
+                                 secondary_ip=secondary_ip)
+
+  def Exec(self, feedback_fn):
+    """Adds the new node to the cluster.
+
+    """
+    new_node = self.new_node
+    node = new_node.name
+
+    # set up inter-node password and certificate and restarts the node daemon
+    gntpass = self.sstore.GetNodeDaemonPassword()
+    if not re.match('^[a-zA-Z0-9.]{1,64}$', gntpass):
+      raise errors.OpExecError, ("ganeti password corruption detected")
+    f = open(constants.SSL_CERT_FILE)
+    try:
+      gntpem = f.read(8192)
+    finally:
+      f.close()
+    # in the base64 pem encoding, neither '!' nor '.' are valid chars,
+    # so we use this to detect an invalid certificate; as long as the
+    # cert doesn't contain this, the here-document will be correctly
+    # parsed by the shell sequence below
+    if re.search('^!EOF\.', gntpem, re.MULTILINE):
+      raise errors.OpExecError, ("invalid PEM encoding in the SSL certificate")
+    if not gntpem.endswith("\n"):
+      raise errors.OpExecError, ("PEM must end with newline")
+    logger.Info("copy cluster pass to %s and starting the node daemon" % node)
+
+    # remove first the root's known_hosts file
+    utils.RemoveFile("/root/.ssh/known_hosts")
+    # and then connect with ssh to set password and start ganeti-noded
+    # note that all the below variables are sanitized at this point,
+    # either by being constants or by the checks above
+    ss = self.sstore
+    mycommand = ("umask 077 && "
+                 "echo '%s' > '%s' && "
+                 "cat > '%s' << '!EOF.' && \n"
+                 "%s!EOF.\n%s restart" %
+                 (gntpass, ss.KeyToFilename(ss.SS_NODED_PASS),
+                  constants.SSL_CERT_FILE, gntpem,
+                  constants.NODE_INITD_SCRIPT))
+
+    result = ssh.SSHCall(node, 'root', mycommand, batch=False, ask_key=True)
+    if result.failed:
+      raise errors.OpExecError, ("Remote command on node %s, error: %s,"
+                                 " output: %s" %
+                                 (node, result.fail_reason, result.output))
+
+    # check connectivity
+    time.sleep(4)
+
+    result = rpc.call_version([node])[node]
+    if result:
+      if constants.PROTOCOL_VERSION == result:
+        logger.Info("communication to node %s fine, sw version %s match" %
+                    (node, result))
+      else:
+        raise errors.OpExecError, ("Version mismatch master version %s,"
+                                   " node version %s" %
+                                   (constants.PROTOCOL_VERSION, result))
+    else:
+      raise errors.OpExecError, ("Cannot get version from the new node")
+
+    # setup ssh on node
+    logger.Info("copy ssh key to node %s" % node)
+    keyarray = []
+    keyfiles = ["/etc/ssh/ssh_host_dsa_key", "/etc/ssh/ssh_host_dsa_key.pub",
+                "/etc/ssh/ssh_host_rsa_key", "/etc/ssh/ssh_host_rsa_key.pub",
+                "/root/.ssh/id_dsa", "/root/.ssh/id_dsa.pub"]
+
+    for i in keyfiles:
+      f = open(i, 'r')
+      try:
+        keyarray.append(f.read())
+      finally:
+        f.close()
+
+    result = rpc.call_node_add(node, keyarray[0], keyarray[1], keyarray[2],
+                               keyarray[3], keyarray[4], keyarray[5])
+
+    if not result:
+      raise errors.OpExecError, ("Cannot transfer ssh keys to the new node")
+
+    # Add node to our /etc/hosts, and add key to known_hosts
+    _UpdateEtcHosts(new_node.name, new_node.primary_ip)
+    _UpdateKnownHosts(new_node.name, new_node.primary_ip,
+                      self.cfg.GetHostKey())
+
+    if new_node.secondary_ip != new_node.primary_ip:
+      result = ssh.SSHCall(node, "root",
+                           "fping -S 127.0.0.1 -q %s" % new_node.secondary_ip)
+      if result.failed:
+        raise errors.OpExecError, ("Node claims it doesn't have the"
+                                   " secondary ip you gave (%s).\n"
+                                   "Please fix and re-run this command." %
+                                   new_node.secondary_ip)
+
+    # Distribute updated /etc/hosts and known_hosts to all nodes,
+    # including the node just added
+    myself = self.cfg.GetNodeInfo(self.cfg.GetMaster())
+    dist_nodes = self.cfg.GetNodeList() + [node]
+    if myself.name in dist_nodes:
+      dist_nodes.remove(myself.name)
+
+    logger.Debug("Copying hosts and known_hosts to all nodes")
+    for fname in ("/etc/hosts", "/etc/ssh/ssh_known_hosts"):
+      result = rpc.call_upload_file(dist_nodes, fname)
+      for to_node in dist_nodes:
+        if not result[to_node]:
+          logger.Error("copy of file %s to node %s failed" %
+                       (fname, to_node))
+
+    to_copy = [constants.MASTER_CRON_FILE,
+               constants.MASTER_INITD_SCRIPT,
+               constants.CLUSTER_NAME_FILE]
+    to_copy.extend(ss.GetFileList())
+    for fname in to_copy:
+      if not ssh.CopyFileToNode(node, fname):
+        logger.Error("could not copy file %s to node %s" % (fname, node))
+
+    logger.Info("adding node %s to cluster.conf" % node)
+    self.cfg.AddNode(new_node)
+
+
+class LUMasterFailover(LogicalUnit):
+  """Failover the master node to the current node.
+
+  This is a special LU in that it must run on a non-master node.
+
+  """
+  HPATH = "master-failover"
+  HTYPE = constants.HTYPE_CLUSTER
+  REQ_MASTER = False
+  _OP_REQP = []
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This will run on the new master only in the pre phase, and on all
+    the nodes in the post phase.
+
+    """
+    env = {
+      "NEW_MASTER": self.new_master,
+      "OLD_MASTER": self.old_master,
+      }
+    return env, [self.new_master], self.cfg.GetNodeList()
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that we are not already the master.
+
+    """
+    self.new_master = socket.gethostname()
+
+    self.old_master = self.cfg.GetMaster()
+
+    if self.old_master == self.new_master:
+      raise errors.OpPrereqError, ("This commands must be run on the node"
+                                   " where you want the new master to be.\n"
+                                   "%s is already the master" %
+                                   self.old_master)
+
+  def Exec(self, feedback_fn):
+    """Failover the master node.
+
+    This command, when run on a non-master node, will cause the current
+    master to cease being master, and the non-master to become new
+    master.
+
+    """
+    # TODO: do not rely on gethostname returning the FQDN
+    logger.Info("setting master to %s, old master: %s" %
+                (self.new_master, self.old_master))
+
+    if not rpc.call_node_stop_master(self.old_master):
+      logger.Error("could disable the master role on the old master"
+                   " %s, please disable manually" % self.old_master)
+
+    if not rpc.call_node_start_master(self.new_master):
+      logger.Error("could not start the master role on the new master"
+                   " %s, please check" % self.new_master)
+
+    self.cfg.SetMaster(self.new_master)
+
+
+class LUQueryClusterInfo(NoHooksLU):
+  """Query cluster configuration.
+
+  """
+  _OP_REQP = []
+
+  def CheckPrereq(self):
+    """No prerequsites needed for this LU.
+
+    """
+    pass
+
+  def Exec(self, feedback_fn):
+    """Return cluster config.
+
+    """
+    instances = [self.cfg.GetInstanceInfo(name)
+                 for name in self.cfg.GetInstanceList()]
+    result = {
+      "name": self.cfg.GetClusterName(),
+      "software_version": constants.RELEASE_VERSION,
+      "protocol_version": constants.PROTOCOL_VERSION,
+      "config_version": constants.CONFIG_VERSION,
+      "os_api_version": constants.OS_API_VERSION,
+      "export_version": constants.EXPORT_VERSION,
+      "master": self.cfg.GetMaster(),
+      "architecture": (platform.architecture()[0], platform.machine()),
+      "instances": [(instance.name, instance.primary_node)
+                    for instance in instances],
+      "nodes": self.cfg.GetNodeList(),
+      }
+
+    return result
+
+
+class LUClusterCopyFile(NoHooksLU):
+  """Copy file to cluster.
+
+  """
+  _OP_REQP = ["nodes", "filename"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    It should check that the named file exists and that the given list
+    of nodes is valid.
+
+    """
+    if not os.path.exists(self.op.filename):
+      raise errors.OpPrereqError("No such filename '%s'" % self.op.filename)
+    if self.op.nodes:
+      nodes = self.op.nodes
+    else:
+      nodes = self.cfg.GetNodeList()
+    self.nodes = []
+    for node in nodes:
+      nname = self.cfg.ExpandNodeName(node)
+      if nname is None:
+        raise errors.OpPrereqError, ("Node '%s' is unknown." % node)
+      self.nodes.append(nname)
+
+  def Exec(self, feedback_fn):
+    """Copy a file from master to some nodes.
+
+    The file (self.op.filename) is copied to every node in self.nodes,
+    as computed in CheckPrereq; the master node itself is skipped.
+
+    """
+    filename = self.op.filename
+
+    myname = socket.gethostname()
+
+    for node in self.nodes:
+      if node == myname:
+        continue
+      if not ssh.CopyFileToNode(node, filename):
+        logger.Error("Copy of file %s to node %s failed" % (filename, node))
+
+
+class LUDumpClusterConfig(NoHooksLU):
+  """Return a text-representation of the cluster-config.
+
+  """
+  _OP_REQP = []
+
+  def CheckPrereq(self):
+    """No prerequisites.
+
+    """
+    pass
+
+  def Exec(self, feedback_fn):
+    """Dump a representation of the cluster config to the standard output.
+
+    """
+    return self.cfg.DumpConfig()
+
+
+class LURunClusterCommand(NoHooksLU):
+  """Run a command on some nodes.
+
+  """
+  _OP_REQP = ["command", "nodes"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    It checks that the given list of nodes is valid.
+
+    """
+    if self.op.nodes:
+      nodes = self.op.nodes
+    else:
+      nodes = self.cfg.GetNodeList()
+    self.nodes = []
+    for node in nodes:
+      nname = self.cfg.ExpandNodeName(node)
+      if nname is None:
+        raise errors.OpPrereqError, ("Node '%s' is unknown." % node)
+      self.nodes.append(nname)
+
+  def Exec(self, feedback_fn):
+    """Run a command on some nodes.
+
+    """
+    data = []
+    for node in self.nodes:
+      result = utils.RunCmd(["ssh", node, self.op.command])
+      data.append((node, result.cmd, result.output, result.exit_code))
+
+    return data
+
+
+class LUActivateInstanceDisks(NoHooksLU):
+  """Bring up an instance's disks.
+
+  """
+  _OP_REQP = ["instance_name"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+    self.instance = instance
+
+  def Exec(self, feedback_fn):
+    """Activate the disks.
+
+    """
+    disks_ok, disks_info = _AssembleInstanceDisks(self.instance, self.cfg)
+    if not disks_ok:
+      raise errors.OpExecError, ("Cannot activate block devices")
+
+    return disks_info
+
+
+def _AssembleInstanceDisks(instance, cfg, ignore_secondaries=False):
+  """Prepare the block devices for an instance.
+
+  This sets up the block devices on all nodes.
+
+  Args:
+    instance: a ganeti.objects.Instance object
+    ignore_secondaries: if true, errors on secondary nodes won't result
+                        in an error return from the function
+
+  Returns:
+    a (disks_ok, device_info) tuple; disks_ok is false if the operation
+    failed, and device_info is the list of
+    (host, instance_visible_name, node_visible_name) triples mapping node
+    devices to instance devices
+
+  """
+  device_info = []
+  disks_ok = True
+  for inst_disk in instance.disks:
+    master_result = None
+    for node, node_disk in inst_disk.ComputeNodeTree(instance.primary_node):
+      cfg.SetDiskID(node_disk, node)
+      is_primary = node == instance.primary_node
+      result = rpc.call_blockdev_assemble(node, node_disk, is_primary)
+      if not result:
+        logger.Error("could not prepare block device %s on node %s (is_pri"
+                     "mary=%s)" % (inst_disk.iv_name, node, is_primary))
+        if is_primary or not ignore_secondaries:
+          disks_ok = False
+      if is_primary:
+        master_result = result
+    device_info.append((instance.primary_node, inst_disk.iv_name,
+                        master_result))
+
+  return disks_ok, device_info
+
+
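+# Usage sketch: callers unpack the (disks_ok, device_info) pair and roll back
+# on failure; see LUStartupInstance.Exec below for the real pattern.
+#
+#   disks_ok, dummy = _AssembleInstanceDisks(instance, self.cfg)
+#   if not disks_ok:
+#     _ShutdownInstanceDisks(instance, self.cfg)
+#     raise errors.OpExecError, ("Cannot activate block devices")
+
+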
+class LUDeactivateInstanceDisks(NoHooksLU):
+  """Shutdown an instance's disks.
+
+  """
+  _OP_REQP = ["instance_name"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+    self.instance = instance
+
+  def Exec(self, feedback_fn):
+    """Deactivate the disks
+
+    """
+    instance = self.instance
+    ins_l = rpc.call_instance_list([instance.primary_node])
+    ins_l = ins_l[instance.primary_node]
+    if type(ins_l) is not list:
+      raise errors.OpExecError, ("Can't contact node '%s'" %
+                                 instance.primary_node)
+
+    if self.instance.name in ins_l:
+      raise errors.OpExecError, ("Instance is running, can't shutdown"
+                                 " block devices.")
+
+    _ShutdownInstanceDisks(instance, self.cfg)
+
+
+def _ShutdownInstanceDisks(instance, cfg, ignore_primary=False):
+  """Shutdown block devices of an instance.
+
+  This does the shutdown on all nodes of the instance.
+
+  Errors on the primary node are ignored only if ignore_primary is
+  true.
+
+  """
+  result = True
+  for disk in instance.disks:
+    for node, top_disk in disk.ComputeNodeTree(instance.primary_node):
+      cfg.SetDiskID(top_disk, node)
+      if not rpc.call_blockdev_shutdown(node, top_disk):
+        logger.Error("could not shutdown block device %s on node %s" %
+                     (disk.iv_name, node))
+        if not ignore_primary or node != instance.primary_node:
+          result = False
+  return result
+
+
+class LUStartupInstance(LogicalUnit):
+  """Starts an instance.
+
+  """
+  HPATH = "instance-start"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name", "force"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on master, primary and secondary nodes of the instance.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "INSTANCE_PRIMARY": self.instance.primary_node,
+      "INSTANCE_SECONDARIES": " ".join(self.instance.secondary_nodes),
+      "FORCE": self.op.force,
+      }
+    nl = ([self.cfg.GetMaster(), self.instance.primary_node] +
+          list(self.instance.secondary_nodes))
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+
+    # check bridges existance
+    brlist = [nic.bridge for nic in instance.nics]
+    if not rpc.call_bridges_exist(instance.primary_node, brlist):
+      raise errors.OpPrereqError, ("one or more target bridges %s does not"
+                                   " exist on destination node '%s'" %
+                                   (brlist, instance.primary_node))
+
+    self.instance = instance
+    self.op.instance_name = instance.name
+
+  def Exec(self, feedback_fn):
+    """Start the instance.
+
+    """
+    instance = self.instance
+    force = self.op.force
+    extra_args = getattr(self.op, "extra_args", "")
+
+    node_current = instance.primary_node
+
+    nodeinfo = rpc.call_node_info([node_current], self.cfg.GetVGName())
+    if not nodeinfo:
+      raise errors.OpExecError, ("Could not contact node %s for infos" %
+                                 (node_current))
+
+    freememory = nodeinfo[node_current]['memory_free']
+    memory = instance.memory
+    if memory > freememory:
+      raise errors.OpExecError, ("Not enough memory to start instance"
+                                 " %s on node %s"
+                                 " needed %s MiB, available %s MiB" %
+                                 (instance.name, node_current, memory,
+                                  freememory))
+
+    disks_ok, dummy = _AssembleInstanceDisks(instance, self.cfg,
+                                             ignore_secondaries=force)
+    if not disks_ok:
+      _ShutdownInstanceDisks(instance, self.cfg)
+      if not force:
+        logger.Error("If the message above refers to a secondary node,"
+                     " you can retry the operation using '--force'.")
+      raise errors.OpExecError, ("Disk consistency error")
+
+    if not rpc.call_instance_start(node_current, instance, extra_args):
+      _ShutdownInstanceDisks(instance, self.cfg)
+      raise errors.OpExecError, ("Could not start instance")
+
+    self.cfg.MarkInstanceUp(instance.name)
+
+
+class LUShutdownInstance(LogicalUnit):
+  """Shutdown an instance.
+
+  """
+  HPATH = "instance-stop"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on master, primary and secondary nodes of the instance.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "INSTANCE_PRIMARY": self.instance.primary_node,
+      "INSTANCE_SECONDARIES": " ".join(self.instance.secondary_nodes),
+      }
+    nl = ([self.cfg.GetMaster(), self.instance.primary_node] +
+          list(self.instance.secondary_nodes))
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+    self.instance = instance
+
+  def Exec(self, feedback_fn):
+    """Shutdown the instance.
+
+    """
+    instance = self.instance
+    node_current = instance.primary_node
+    if not rpc.call_instance_shutdown(node_current, instance):
+      logger.Error("could not shutdown instance")
+
+    self.cfg.MarkInstanceDown(instance.name)
+    _ShutdownInstanceDisks(instance, self.cfg)
+
+
+class LURemoveInstance(LogicalUnit):
+  """Remove an instance.
+
+  """
+  HPATH = "instance-remove"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on master, primary and secondary nodes of the instance.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "INSTANCE_PRIMARY": self.instance.primary_node,
+      "INSTANCE_SECONDARIES": " ".join(self.instance.secondary_nodes),
+      }
+    nl = ([self.cfg.GetMaster(), self.instance.primary_node] +
+          list(self.instance.secondary_nodes))
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+    self.instance = instance
+
+  def Exec(self, feedback_fn):
+    """Remove the instance.
+
+    """
+    instance = self.instance
+    logger.Info("shutting down instance %s on node %s" %
+                (instance.name, instance.primary_node))
+
+    if not rpc.call_instance_shutdown(instance.primary_node, instance):
+      raise errors.OpExecError, ("Could not shutdown instance %s on node %s" %
+                                 (instance.name, instance.primary_node))
+
+    logger.Info("removing block devices for instance %s" % instance.name)
+
+    _RemoveDisks(instance, self.cfg)
+
+    logger.Info("removing instance %s out of cluster config" % instance.name)
+
+    self.cfg.RemoveInstance(instance.name)
+
+
+class LUQueryInstances(NoHooksLU):
+  """Logical unit for querying instances.
+
+  """
+  OP_REQP = ["output_fields"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the fields required are valid output fields.
+
+    """
+
+    self.static_fields = frozenset(["name", "os", "pnode", "snodes",
+                                    "admin_state", "admin_ram",
+                                    "disk_template", "ip", "mac", "bridge"])
+    self.dynamic_fields = frozenset(["oper_state", "oper_ram"])
+    self.all_fields = self.static_fields | self.dynamic_fields
+
+    if not self.all_fields.issuperset(self.op.output_fields):
+      raise errors.OpPrereqError, ("Unknown output fields selected: %s"
+                                   % ",".join(frozenset(self.op.output_fields).
+                                              difference(self.all_fields)))
+
+  def Exec(self, feedback_fn):
+    """Computes the list of nodes and their attributes.
+
+    """
+    instance_names = utils.NiceSort(self.cfg.GetInstanceList())
+    instance_list = [self.cfg.GetInstanceInfo(iname) for iname
+                     in instance_names]
+
+    # begin data gathering
+
+    nodes = frozenset([inst.primary_node for inst in instance_list])
+
+    bad_nodes = []
+    if self.dynamic_fields.intersection(self.op.output_fields):
+      live_data = {}
+      node_data = rpc.call_all_instances_info(nodes)
+      for name in nodes:
+        result = node_data[name]
+        if result:
+          live_data.update(result)
+        elif result is False:
+          bad_nodes.append(name)
+        # else no instance is alive
+    else:
+      live_data = dict([(name, {}) for name in instance_names])
+
+    # end data gathering
+
+    output = []
+    for instance in instance_list:
+      iout = []
+      for field in self.op.output_fields:
+        if field == "name":
+          val = instance.name
+        elif field == "os":
+          val = instance.os
+        elif field == "pnode":
+          val = instance.primary_node
+        elif field == "snodes":
+          val = ",".join(instance.secondary_nodes) or "-"
+        elif field == "admin_state":
+          if instance.status == "down":
+            val = "no"
+          else:
+            val = "yes"
+        elif field == "oper_state":
+          if instance.primary_node in bad_nodes:
+            val = "(node down)"
+          else:
+            if live_data.get(instance.name):
+              val = "running"
+            else:
+              val = "stopped"
+        elif field == "admin_ram":
+          val = instance.memory
+        elif field == "oper_ram":
+          if instance.primary_node in bad_nodes:
+            val = "(node down)"
+          elif instance.name in live_data:
+            val = live_data[instance.name].get("memory", "?")
+          else:
+            val = "-"
+        elif field == "disk_template":
+          val = instance.disk_template
+        elif field == "ip":
+          val = instance.nics[0].ip
+        elif field == "bridge":
+          val = instance.nics[0].bridge
+        elif field == "mac":
+          val = instance.nics[0].mac
+        else:
+          raise errors.ParameterError, field
+        val = str(val)
+        iout.append(val)
+      output.append(iout)
+
+    return output
+
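+# Illustrative sketch (hypothetical names): for a query with
+# output_fields=["name", "pnode", "oper_state"], the Exec() method above
+# returns one row of strings per instance, for example:
+#   [["instance1.example.com", "node1.example.com", "running"],
+#    ["instance2.example.com", "node2.example.com", "stopped"]]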
+
+class LUFailoverInstance(LogicalUnit):
+  """Failover an instance.
+
+  """
+  HPATH = "instance-failover"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name", "ignore_consistency"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on the master and the secondary nodes of the instance.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "INSTANCE_PRIMARY": self.instance.primary_node,
+      "INSTANCE_SECONDARIES": " ".join(self.instance.secondary_nodes),
+      "IGNORE_CONSISTENCY": self.op.ignore_consistency,
+      }
+    nl = [self.cfg.GetMaster()] + list(self.instance.secondary_nodes)
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+
+    # check bridge existence
+    brlist = [nic.bridge for nic in instance.nics]
+    if not rpc.call_bridges_exist(instance.primary_node, brlist):
+      raise errors.OpPrereqError, ("one or more target bridges %s does not"
+                                   " exist on destination node '%s'" %
+                                   (brlist, instance.primary_node))
+
+    self.instance = instance
+
+  def Exec(self, feedback_fn):
+    """Failover an instance.
+
+    The failover is done by shutting it down on its present node and
+    starting it on the secondary.
+
+    """
+    instance = self.instance
+
+    source_node = instance.primary_node
+    target_node = instance.secondary_nodes[0]
+
+    feedback_fn("* checking disk consistency between source and target")
+    for dev in instance.disks:
+      # for remote_raid1, these are md over drbd
+      if not _CheckDiskConsistency(self.cfg, dev, target_node, False):
+        if not self.op.ignore_consistency:
+          raise errors.OpExecError, ("Disk %s is degraded on target node,"
+                                     " aborting failover." % dev.iv_name)
+
+    feedback_fn("* checking target node resource availability")
+    nodeinfo = rpc.call_node_info([target_node], self.cfg.GetVGName())
+
+    if not nodeinfo:
+      raise errors.OpExecError, ("Could not contact target node %s." %
+                                 target_node)
+
+    free_memory = int(nodeinfo[target_node]['memory_free'])
+    memory = instance.memory
+    if memory > free_memory:
+      raise errors.OpExecError, ("Not enough memory to create instance %s on"
+                                 " node %s. needed %s MiB, available %s MiB" %
+                                 (instance.name, target_node, memory,
+                                  free_memory))
+
+    feedback_fn("* shutting down instance on source node")
+    logger.Info("Shutting down instance %s on node %s" %
+                (instance.name, source_node))
+
+    if not rpc.call_instance_shutdown(source_node, instance):
+      logger.Error("Could not shutdown instance %s on node %s. Proceeding"
+                   " anyway. Please make sure node %s is down"  %
+                   (instance.name, source_node, source_node))
+
+    feedback_fn("* deactivating the instance's disks on source node")
+    if not _ShutdownInstanceDisks(instance, self.cfg, ignore_primary=True):
+      raise errors.OpExecError, ("Can't shut down the instance's disks.")
+
+    instance.primary_node = target_node
+    # distribute new instance config to the other nodes
+    self.cfg.AddInstance(instance)
+
+    feedback_fn("* activating the instance's disks on target node")
+    logger.Info("Starting instance %s on node %s" %
+                (instance.name, target_node))
+
+    disks_ok, dummy = _AssembleInstanceDisks(instance, self.cfg,
+                                             ignore_secondaries=True)
+    if not disks_ok:
+      _ShutdownInstanceDisks(instance, self.cfg)
+      raise errors.OpExecError, ("Can't activate the instance's disks")
+
+    feedback_fn("* starting the instance on the target node")
+    if not rpc.call_instance_start(target_node, instance, None):
+      _ShutdownInstanceDisks(instance, self.cfg)
+      raise errors.OpExecError("Could not start instance %s on node %s." %
+                               (instance, target_node))
+
+
+def _CreateBlockDevOnPrimary(cfg, node, device):
+  """Create a tree of block devices on the primary node.
+
+  This always creates all devices.
+
+  """
+  if device.children:
+    for child in device.children:
+      if not _CreateBlockDevOnPrimary(cfg, node, child):
+        return False
+
+  cfg.SetDiskID(device, node)
+  new_id = rpc.call_blockdev_create(node, device, device.size, True)
+  if not new_id:
+    return False
+  if device.physical_id is None:
+    device.physical_id = new_id
+  return True
+
+
+def _CreateBlockDevOnSecondary(cfg, node, device, force):
+  """Create a tree of block devices on a secondary node.
+
+  If this device type has to be created on secondaries, create it and
+  all its children.
+
+  If not, just recurse to children keeping the same 'force' value.
+
+  """
+  if device.CreateOnSecondary():
+    force = True
+  if device.children:
+    for child in device.children:
+      if not _CreateBlockDevOnSecondary(cfg, node, child, force):
+        return False
+
+  if not force:
+    return True
+  cfg.SetDiskID(device, node)
+  new_id = rpc.call_blockdev_create(node, device, device.size, False)
+  if not new_id:
+    return False
+  if device.physical_id is None:
+    device.physical_id = new_id
+  return True
+
+
+def _GenerateMDDRBDBranch(cfg, vgname, primary, secondary, size, base):
+  """Generate a drbd device complete with its children.
+
+  """
+  port = cfg.AllocatePort()
+  base = "%s_%s" % (base, port)
+  dev_data = objects.Disk(dev_type="lvm", size=size,
+                          logical_id=(vgname, "%s.data" % base))
+  dev_meta = objects.Disk(dev_type="lvm", size=128,
+                          logical_id=(vgname, "%s.meta" % base))
+  drbd_dev = objects.Disk(dev_type="drbd", size=size,
+                          logical_id = (primary, secondary, port),
+                          children = [dev_data, dev_meta])
+  return drbd_dev
+
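+# Illustrative sketch (hypothetical names and port): for primary "node1",
+# secondary "node2" and base "inst1.example.com-sda", with AllocatePort()
+# returning 11000, _GenerateMDDRBDBranch() builds roughly:
+#   Disk(dev_type="drbd", size=size, logical_id=("node1", "node2", 11000),
+#        children=[Disk(dev_type="lvm", size=size,
+#                       logical_id=(vgname, "inst1.example.com-sda_11000.data")),
+#                  Disk(dev_type="lvm", size=128,
+#                       logical_id=(vgname, "inst1.example.com-sda_11000.meta"))])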
+
+def _GenerateDiskTemplate(cfg, vgname, template_name,
+                          instance_name, primary_node,
+                          secondary_nodes, disk_sz, swap_sz):
+  """Generate the entire disk layout for a given template type.
+
+  """
+  #TODO: compute space requirements
+
+  if template_name == "diskless":
+    disks = []
+  elif template_name == "plain":
+    if len(secondary_nodes) != 0:
+      raise errors.ProgrammerError("Wrong template configuration")
+    sda_dev = objects.Disk(dev_type="lvm", size=disk_sz,
+                           logical_id=(vgname, "%s.os" % instance_name),
+                           iv_name = "sda")
+    sdb_dev = objects.Disk(dev_type="lvm", size=swap_sz,
+                           logical_id=(vgname, "%s.swap" % instance_name),
+                           iv_name = "sdb")
+    disks = [sda_dev, sdb_dev]
+  elif template_name == "local_raid1":
+    if len(secondary_nodes) != 0:
+      raise errors.ProgrammerError("Wrong template configuration")
+    sda_dev_m1 = objects.Disk(dev_type="lvm", size=disk_sz,
+                              logical_id=(vgname, "%s.os_m1" % instance_name))
+    sda_dev_m2 = objects.Disk(dev_type="lvm", size=disk_sz,
+                              logical_id=(vgname, "%s.os_m2" % instance_name))
+    md_sda_dev = objects.Disk(dev_type="md_raid1", iv_name = "sda",
+                              size=disk_sz,
+                              children = [sda_dev_m1, sda_dev_m2])
+    sdb_dev_m1 = objects.Disk(dev_type="lvm", size=swap_sz,
+                              logical_id=(vgname, "%s.swap_m1" %
+                                          instance_name))
+    sdb_dev_m2 = objects.Disk(dev_type="lvm", size=swap_sz,
+                              logical_id=(vgname, "%s.swap_m2" %
+                                          instance_name))
+    md_sdb_dev = objects.Disk(dev_type="md_raid1", iv_name = "sdb",
+                              size=swap_sz,
+                              children = [sdb_dev_m1, sdb_dev_m2])
+    disks = [md_sda_dev, md_sdb_dev]
+  elif template_name == "remote_raid1":
+    if len(secondary_nodes) != 1:
+      raise errors.ProgrammerError("Wrong template configuration")
+    remote_node = secondary_nodes[0]
+    drbd_sda_dev = _GenerateMDDRBDBranch(cfg, vgname,
+                                         primary_node, remote_node, disk_sz,
+                                         "%s-sda" % instance_name)
+    md_sda_dev = objects.Disk(dev_type="md_raid1", iv_name="sda",
+                              children = [drbd_sda_dev], size=disk_sz)
+    drbd_sdb_dev = _GenerateMDDRBDBranch(cfg, vgname,
+                                         primary_node, remote_node, swap_sz,
+                                         "%s-sdb" % instance_name)
+    md_sdb_dev = objects.Disk(dev_type="md_raid1", iv_name="sdb",
+                              children = [drbd_sdb_dev], size=swap_sz)
+    disks = [md_sda_dev, md_sdb_dev]
+  else:
+    raise errors.ProgrammerError("Invalid disk template '%s'" % template_name)
+  return disks
+
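+# Illustrative sketch (hypothetical values): for template_name="plain",
+# instance_name="inst1.example.com", disk_sz=10240 and swap_sz=4096, the
+# function above returns two plain lvm-backed disks:
+#   [Disk(dev_type="lvm", size=10240, iv_name="sda",
+#         logical_id=(vgname, "inst1.example.com.os")),
+#    Disk(dev_type="lvm", size=4096, iv_name="sdb",
+#         logical_id=(vgname, "inst1.example.com.swap"))]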
+
+def _CreateDisks(cfg, instance):
+  """Create all disks for an instance.
+
+  This abstracts away some work from AddInstance.
+
+  Args:
+    instance: the instance object
+
+  Returns:
+    True or False showing the success of the creation process
+
+  """
+  for device in instance.disks:
+    logger.Info("creating volume %s for instance %s" %
+              (device.iv_name, instance.name))
+    #HARDCODE
+    for secondary_node in instance.secondary_nodes:
+      if not _CreateBlockDevOnSecondary(cfg, secondary_node, device, False):
+        logger.Error("failed to create volume %s (%s) on secondary node %s!" %
+                     (device.iv_name, device, secondary_node))
+        return False
+    #HARDCODE
+    if not _CreateBlockDevOnPrimary(cfg, instance.primary_node, device):
+      logger.Error("failed to create volume %s on primary!" %
+                   device.iv_name)
+      return False
+  return True
+
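+# Note on ordering: for each disk, _CreateDisks() above creates the device
+# tree on all secondary nodes first and only then on the primary node; any
+# failure aborts the whole process and makes it return False.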
+
+def _RemoveDisks(instance, cfg):
+  """Remove all disks for an instance.
+
+  This abstracts away some work from `AddInstance()` and
+  `RemoveInstance()`. Note that in case some of the devices couldn't
+    be removed, the removal will continue with the other ones (compare
+  with `_CreateDisks()`).
+
+  Args:
+    instance: the instance object
+
+  Returns:
+    True or False showing the success of the removal process
+
+  """
+  logger.Info("removing block devices for instance %s" % instance.name)
+
+  result = True
+  for device in instance.disks:
+    for node, disk in device.ComputeNodeTree(instance.primary_node):
+      cfg.SetDiskID(disk, node)
+      if not rpc.call_blockdev_remove(node, disk):
+        logger.Error("could not remove block device %s on node %s,"
+                     " continuing anyway" %
+                     (device.iv_name, node))
+        result = False
+  return result
+
+
+class LUCreateInstance(LogicalUnit):
+  """Create an instance.
+
+  """
+  HPATH = "instance-add"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name", "mem_size", "disk_size", "pnode",
+              "disk_template", "swap_size", "mode", "start", "vcpus",
+              "wait_for_sync"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on master, primary and secondary nodes of the instance.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "INSTANCE_PRIMARY": self.op.pnode,
+      "INSTANCE_SECONDARIES": " ".join(self.secondaries),
+      "DISK_TEMPLATE": self.op.disk_template,
+      "MEM_SIZE": self.op.mem_size,
+      "DISK_SIZE": self.op.disk_size,
+      "SWAP_SIZE": self.op.swap_size,
+      "VCPUS": self.op.vcpus,
+      "BRIDGE": self.op.bridge,
+      "INSTANCE_ADD_MODE": self.op.mode,
+      }
+    if self.op.mode == constants.INSTANCE_IMPORT:
+      env["SRC_NODE"] = self.op.src_node
+      env["SRC_PATH"] = self.op.src_path
+      env["SRC_IMAGE"] = self.src_image
+    if self.inst_ip:
+      env["INSTANCE_IP"] = self.inst_ip
+
+    nl = ([self.cfg.GetMaster(), self.op.pnode] +
+          self.secondaries)
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    """
+    if self.op.mode not in (constants.INSTANCE_CREATE,
+                            constants.INSTANCE_IMPORT):
+      raise errors.OpPrereqError, ("Invalid instance creation mode '%s'" %
+                                   self.op.mode)
+
+    if self.op.mode == constants.INSTANCE_IMPORT:
+      src_node = getattr(self.op, "src_node", None)
+      src_path = getattr(self.op, "src_path", None)
+      if src_node is None or src_path is None:
+        raise errors.OpPrereqError, ("Importing an instance requires source"
+                                     " node and path options")
+      src_node_full = self.cfg.ExpandNodeName(src_node)
+      if src_node_full is None:
+        raise errors.OpPrereqError, ("Unknown source node '%s'" % src_node)
+      self.op.src_node = src_node = src_node_full
+
+      if not os.path.isabs(src_path):
+        raise errors.OpPrereqError, ("The source path must be absolute")
+
+      export_info = rpc.call_export_info(src_node, src_path)
+
+      if not export_info:
+        raise errors.OpPrereqError, ("No export found in dir %s" % src_path)
+
+      if not export_info.has_section(constants.INISECT_EXP):
+        raise errors.ProgrammerError, ("Corrupted export config")
+
+      ei_version = export_info.get(constants.INISECT_EXP, 'version')
+      if int(ei_version) != constants.EXPORT_VERSION:
+        raise errors.OpPrereqError, ("Wrong export version %s (wanted %d)" %
+                                     (ei_version, constants.EXPORT_VERSION))
+
+      if int(export_info.get(constants.INISECT_INS, 'disk_count')) > 1:
+        raise errors.OpPrereqError, ("Can't import instance with more than"
+                                     " one data disk")
+
+      # FIXME: are the old os-es, disk sizes, etc. useful?
+      self.op.os_type = export_info.get(constants.INISECT_EXP, 'os')
+      diskimage = os.path.join(src_path, export_info.get(constants.INISECT_INS,
+                                                         'disk0_dump'))
+      self.src_image = diskimage
+    else: # INSTANCE_CREATE
+      if getattr(self.op, "os_type", None) is None:
+        raise errors.OpPrereqError, ("No guest OS specified")
+
+    # check primary node
+    pnode = self.cfg.GetNodeInfo(self.cfg.ExpandNodeName(self.op.pnode))
+    if pnode is None:
+      raise errors.OpPrereqError, ("Primary node '%s' is uknown" %
+                                   self.op.pnode)
+    self.op.pnode = pnode.name
+    self.pnode = pnode
+    self.secondaries = []
+    # disk template and mirror node verification
+    if self.op.disk_template not in constants.DISK_TEMPLATES:
+      raise errors.OpPrereqError, ("Invalid disk template name")
+
+    if self.op.disk_template == constants.DT_REMOTE_RAID1:
+      if getattr(self.op, "snode", None) is None:
+        raise errors.OpPrereqError, ("The 'remote_raid1' disk template needs"
+                                     " a mirror node")
+
+      snode_name = self.cfg.ExpandNodeName(self.op.snode)
+      if snode_name is None:
+        raise errors.OpPrereqError, ("Unknown secondary node '%s'" %
+                                     self.op.snode)
+      elif snode_name == pnode.name:
+        raise errors.OpPrereqError, ("The secondary node cannot be"
+                                     " the primary node.")
+      self.secondaries.append(snode_name)
+
+    # os verification
+    os_obj = rpc.call_os_get([pnode.name], self.op.os_type)[pnode.name]
+    if not isinstance(os_obj, objects.OS):
+      raise errors.OpPrereqError, ("OS '%s' not in supported os list for"
+                                   " primary node"  % self.op.os_type)
+
+    # instance verification
+    hostname1 = utils.LookupHostname(self.op.instance_name)
+    if not hostname1:
+      raise errors.OpPrereqError, ("Instance name '%s' not found in dns" %
+                                   self.op.instance_name)
+
+    self.op.instance_name = instance_name = hostname1['hostname']
+    instance_list = self.cfg.GetInstanceList()
+    if instance_name in instance_list:
+      raise errors.OpPrereqError, ("Instance '%s' is already in the cluster" %
+                                   instance_name)
+
+    ip = getattr(self.op, "ip", None)
+    if ip is None or ip.lower() == "none":
+      inst_ip = None
+    elif ip.lower() == "auto":
+      inst_ip = hostname1['ip']
+    else:
+      if not utils.IsValidIP(ip):
+        raise errors.OpPrereqError, ("given IP address '%s' doesn't look"
+                                     " like a valid IP" % ip)
+      inst_ip = ip
+    self.inst_ip = inst_ip
+
+    command = ["fping", "-q", hostname1['ip']]
+    result = utils.RunCmd(command)
+    if not result.failed:
+      raise errors.OpPrereqError, ("IP %s of instance %s already in use" %
+                                   (hostname1['ip'], instance_name))
+
+    # bridge verification
+    bridge = getattr(self.op, "bridge", None)
+    if bridge is None:
+      self.op.bridge = self.cfg.GetDefBridge()
+    else:
+      self.op.bridge = bridge
+
+    if not rpc.call_bridges_exist(self.pnode.name, [self.op.bridge]):
+      raise errors.OpPrereqError, ("target bridge '%s' does not exist on"
+                                   " destination node '%s'" %
+                                   (self.op.bridge, pnode.name))
+
+    if self.op.start:
+      self.instance_status = 'up'
+    else:
+      self.instance_status = 'down'
+
+  def Exec(self, feedback_fn):
+    """Create and add the instance to the cluster.
+
+    """
+    instance = self.op.instance_name
+    pnode_name = self.pnode.name
+
+    nic = objects.NIC(bridge=self.op.bridge, mac=self.cfg.GenerateMAC())
+    if self.inst_ip is not None:
+      nic.ip = self.inst_ip
+
+    disks = _GenerateDiskTemplate(self.cfg, self.cfg.GetVGName(),
+                                  self.op.disk_template,
+                                  instance, pnode_name,
+                                  self.secondaries, self.op.disk_size,
+                                  self.op.swap_size)
+
+    iobj = objects.Instance(name=instance, os=self.op.os_type,
+                            primary_node=pnode_name,
+                            memory=self.op.mem_size,
+                            vcpus=self.op.vcpus,
+                            nics=[nic], disks=disks,
+                            disk_template=self.op.disk_template,
+                            status=self.instance_status,
+                            )
+
+    feedback_fn("* creating instance disks...")
+    if not _CreateDisks(self.cfg, iobj):
+      _RemoveDisks(iobj, self.cfg)
+      raise errors.OpExecError, ("Device creation failed, reverting...")
+
+    feedback_fn("adding instance %s to cluster config" % instance)
+
+    self.cfg.AddInstance(iobj)
+
+    if self.op.wait_for_sync:
+      disk_abort = not _WaitForSync(self.cfg, iobj)
+    elif iobj.disk_template == "remote_raid1":
+      # make sure the disks are not degraded (still sync-ing is ok)
+      time.sleep(15)
+      feedback_fn("* checking mirrors status")
+      disk_abort = not _WaitForSync(self.cfg, iobj, oneshot=True)
+    else:
+      disk_abort = False
+
+    if disk_abort:
+      _RemoveDisks(iobj, self.cfg)
+      self.cfg.RemoveInstance(iobj.name)
+      raise errors.OpExecError, ("There are some degraded disks for"
+                                      " this instance")
+
+    feedback_fn("creating os for instance %s on node %s" %
+                (instance, pnode_name))
+
+    if iobj.disk_template != constants.DT_DISKLESS:
+      if self.op.mode == constants.INSTANCE_CREATE:
+        feedback_fn("* running the instance OS create scripts...")
+        if not rpc.call_instance_os_add(pnode_name, iobj, "sda", "sdb"):
+          raise errors.OpExecError, ("could not add os for instance %s"
+                                          " on node %s" %
+                                          (instance, pnode_name))
+
+      elif self.op.mode == constants.INSTANCE_IMPORT:
+        feedback_fn("* running the instance OS import scripts...")
+        src_node = self.op.src_node
+        src_image = self.src_image
+        if not rpc.call_instance_os_import(pnode_name, iobj, "sda", "sdb",
+                                           src_node, src_image):
+          raise errors.OpExecError, ("Could not import os for instance"
+                                     " %s on node %s" %
+                                     (instance, pnode_name))
+      else:
+        # also checked in the prereq part
+        raise errors.ProgrammerError, ("Unknown OS initialization mode '%s'"
+                                       % self.op.mode)
+
+    if self.op.start:
+      logger.Info("starting instance %s on node %s" % (instance, pnode_name))
+      feedback_fn("* starting instance...")
+      if not rpc.call_instance_start(pnode_name, iobj, None):
+        raise errors.OpExecError, ("Could not start instance")
+
+
+class LUConnectConsole(NoHooksLU):
+  """Connect to an instance's console.
+
+  This is somewhat special in that it returns the command line that
+  you need to run on the master node in order to connect to the
+  console.
+
+  """
+  _OP_REQP = ["instance_name"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+    self.instance = instance
+
+  def Exec(self, feedback_fn):
+    """Connect to the console of an instance
+
+    """
+    instance = self.instance
+    node = instance.primary_node
+
+    node_insts = rpc.call_instance_list([node])[node]
+    if node_insts is False:
+      raise errors.OpExecError, ("Can't connect to node %s." % node)
+
+    if instance.name not in node_insts:
+      raise errors.OpExecError, ("Instance %s is not running." % instance.name)
+
+    logger.Debug("connecting to console of %s on %s" % (instance.name, node))
+
+    hyper = hypervisor.GetHypervisor()
+    console_cmd = hyper.GetShellCommandForConsole(instance.name)
+    return node, console_cmd
+
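+# Illustrative example (hypothetical output): for an instance running under
+# the Xen hypervisor, the Exec() method above might return something like
+#   ("node1.example.com", "xm console instance1.example.com")
+# i.e. the node to connect to and the shell command to run on it.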
+
+class LUAddMDDRBDComponent(LogicalUnit):
+  """Adda new mirror member to an instance's disk.
+
+  """
+  HPATH = "mirror-add"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name", "remote_node", "disk_name"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on the master, the primary and all the secondaries.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "NEW_SECONDARY": self.op.remote_node,
+      "DISK_NAME": self.op.disk_name,
+      }
+    nl = [self.cfg.GetMaster(), self.instance.primary_node,
+          self.op.remote_node] + list(self.instance.secondary_nodes)
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+    self.instance = instance
+
+    remote_node = self.cfg.ExpandNodeName(self.op.remote_node)
+    if remote_node is None:
+      raise errors.OpPrereqError, ("Node '%s' not known" % self.op.remote_node)
+    self.remote_node = remote_node
+
+    if remote_node == instance.primary_node:
+      raise errors.OpPrereqError, ("The specified node is the primary node of"
+                                   " the instance.")
+
+    if instance.disk_template != constants.DT_REMOTE_RAID1:
+      raise errors.OpPrereqError, ("Instance's disk layout is not"
+                                   " remote_raid1.")
+    for disk in instance.disks:
+      if disk.iv_name == self.op.disk_name:
+        break
+    else:
+      raise errors.OpPrereqError, ("Can't find this device ('%s') in the"
+                                   " instance." % self.op.disk_name)
+    if len(disk.children) > 1:
+      raise errors.OpPrereqError, ("The device already has two slave"
+                                   " devices.\n"
+                                   "This would create a 3-disk raid1"
+                                   " which we don't allow.")
+    self.disk = disk
+
+  def Exec(self, feedback_fn):
+    """Add the mirror component
+
+    """
+    disk = self.disk
+    instance = self.instance
+
+    remote_node = self.remote_node
+    new_drbd = _GenerateMDDRBDBranch(self.cfg, self.cfg.GetVGName(),
+                                     instance.primary_node, remote_node,
+                                     disk.size, "%s-%s" %
+                                     (instance.name, self.op.disk_name))
+
+    logger.Info("adding new mirror component on secondary")
+    #HARDCODE
+    if not _CreateBlockDevOnSecondary(self.cfg, remote_node, new_drbd, False):
+      raise errors.OpExecError, ("Failed to create new component on secondary"
+                                 " node %s" % remote_node)
+
+    logger.Info("adding new mirror component on primary")
+    #HARDCODE
+    if not _CreateBlockDevOnPrimary(self.cfg, instance.primary_node, new_drbd):
+      # remove secondary dev
+      self.cfg.SetDiskID(new_drbd, remote_node)
+      rpc.call_blockdev_remove(remote_node, new_drbd)
+      raise errors.OpExecError, ("Failed to create volume on primary")
+
+    # the device exists now
+    # call the primary node to add the mirror to md
+    logger.Info("adding new mirror component to md")
+    if not rpc.call_blockdev_addchild(instance.primary_node, disk, new_drbd):
+      logger.Error("Can't add mirror compoment to md!")
+      self.cfg.SetDiskID(new_drbd, remote_node)
+      if not rpc.call_blockdev_remove(remote_node, new_drbd):
+        logger.Error("Can't rollback on secondary")
+      self.cfg.SetDiskID(new_drbd, instance.primary_node)
+      if not rpc.call_blockdev_remove(instance.primary_node, new_drbd):
+        logger.Error("Can't rollback on primary")
+      raise errors.OpExecError, "Can't add mirror component to md array"
+
+    disk.children.append(new_drbd)
+
+    self.cfg.AddInstance(instance)
+
+    _WaitForSync(self.cfg, instance)
+
+    return 0
+
+
+class LURemoveMDDRBDComponent(LogicalUnit):
+  """Remove a component from a remote_raid1 disk.
+
+  """
+  HPATH = "mirror-remove"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name", "disk_name", "disk_id"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on the master, the primary and all the secondaries.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "DISK_NAME": self.op.disk_name,
+      "DISK_ID": self.op.disk_id,
+      "OLD_SECONDARY": self.old_secondary,
+      }
+    nl = [self.cfg.GetMaster(),
+          self.instance.primary_node] + list(self.instance.secondary_nodes)
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+    self.instance = instance
+
+    if instance.disk_template != constants.DT_REMOTE_RAID1:
+      raise errors.OpPrereqError, ("Instance's disk layout is not"
+                                   " remote_raid1.")
+    for disk in instance.disks:
+      if disk.iv_name == self.op.disk_name:
+        break
+    else:
+      raise errors.OpPrereqError, ("Can't find this device ('%s') in the"
+                                   " instance." % self.op.disk_name)
+    for child in disk.children:
+      if child.dev_type == "drbd" and child.logical_id[2] == self.op.disk_id:
+        break
+    else:
+      raise errors.OpPrereqError, ("Can't find the device with this port.")
+
+    if len(disk.children) < 2:
+      raise errors.OpPrereqError, ("Cannot remove the last component from"
+                                   " a mirror.")
+    self.disk = disk
+    self.child = child
+    if self.child.logical_id[0] == instance.primary_node:
+      oid = 1
+    else:
+      oid = 0
+    self.old_secondary = self.child.logical_id[oid]
+
+  def Exec(self, feedback_fn):
+    """Remove the mirror component
+
+    """
+    instance = self.instance
+    disk = self.disk
+    child = self.child
+    logger.Info("remove mirror component")
+    self.cfg.SetDiskID(disk, instance.primary_node)
+    if not rpc.call_blockdev_removechild(instance.primary_node, disk, child):
+      raise errors.OpExecError, ("Can't remove child from mirror.")
+
+    for node in child.logical_id[:2]:
+      self.cfg.SetDiskID(child, node)
+      if not rpc.call_blockdev_remove(node, child):
+        logger.Error("Warning: failed to remove device from node %s,"
+                     " continuing operation." % node)
+
+    disk.children.remove(child)
+    self.cfg.AddInstance(instance)
+
+
+class LUReplaceDisks(LogicalUnit):
+  """Replace the disks of an instance.
+
+  """
+  HPATH = "mirrors-replace"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on the master, the primary and all the secondaries.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "NEW_SECONDARY": self.op.remote_node,
+      "OLD_SECONDARY": self.instance.secondary_nodes[0],
+      }
+    nl = [self.cfg.GetMaster(),
+          self.instance.primary_node] + list(self.instance.secondary_nodes)
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance is in the cluster.
+
+    """
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not known" %
+                                   self.op.instance_name)
+    self.instance = instance
+
+    if instance.disk_template != constants.DT_REMOTE_RAID1:
+      raise errors.OpPrereqError, ("Instance's disk layout is not"
+                                   " remote_raid1.")
+
+    if len(instance.secondary_nodes) != 1:
+      raise errors.OpPrereqError, ("The instance has a strange layout,"
+                                   " expected one secondary but found %d" %
+                                   len(instance.secondary_nodes))
+
+    remote_node = getattr(self.op, "remote_node", None)
+    if remote_node is None:
+      remote_node = instance.secondary_nodes[0]
+    else:
+      remote_node = self.cfg.ExpandNodeName(remote_node)
+      if remote_node is None:
+        raise errors.OpPrereqError, ("Node '%s' not known" %
+                                     self.op.remote_node)
+    if remote_node == instance.primary_node:
+      raise errors.OpPrereqError, ("The specified node is the primary node of"
+                                   " the instance.")
+    self.op.remote_node = remote_node
+
+  def Exec(self, feedback_fn):
+    """Replace the disks of an instance.
+
+    """
+    instance = self.instance
+    iv_names = {}
+    # start of work
+    remote_node = self.op.remote_node
+    cfg = self.cfg
+    for dev in instance.disks:
+      size = dev.size
+      new_drbd = _GenerateMDDRBDBranch(cfg, cfg.GetVGName(),
+                                       instance.primary_node, remote_node,
+                                       size,
+                                       "%s-%s" % (instance.name, dev.iv_name))
+      iv_names[dev.iv_name] = (dev, dev.children[0], new_drbd)
+      logger.Info("adding new mirror component on secondary for %s" %
+                  dev.iv_name)
+      #HARDCODE
+      if not _CreateBlockDevOnSecondary(cfg, remote_node, new_drbd, False):
+        raise errors.OpExecError, ("Failed to create new component on"
+                                   " secondary node %s\n"
+                                   "Full abort, cleanup manually!" %
+                                   remote_node)
+
+      logger.Info("adding new mirror component on primary")
+      #HARDCODE
+      if not _CreateBlockDevOnPrimary(cfg, instance.primary_node, new_drbd):
+        # remove secondary dev
+        cfg.SetDiskID(new_drbd, remote_node)
+        rpc.call_blockdev_remove(remote_node, new_drbd)
+        raise errors.OpExecError("Failed to create volume on primary!\n"
+                                 "Full abort, cleanup manually!!")
+
+      # the device exists now
+      # call the primary node to add the mirror to md
+      logger.Info("adding new mirror component to md")
+      if not rpc.call_blockdev_addchild(instance.primary_node, dev, new_drbd):
+        logger.Error("Can't add mirror compoment to md!")
+        cfg.SetDiskID(new_drbd, remote_node)
+        if not rpc.call_blockdev_remove(remote_node, new_drbd):
+          logger.Error("Can't rollback on secondary")
+        cfg.SetDiskID(new_drbd, instance.primary_node)
+        if not rpc.call_blockdev_remove(instance.primary_node, new_drbd):
+          logger.Error("Can't rollback on primary")
+        raise errors.OpExecError, ("Full abort, cleanup manually!!")
+
+      dev.children.append(new_drbd)
+      cfg.AddInstance(instance)
+
+    # this can fail as the old devices are degraded and _WaitForSync
+    # computes a combined result over all disks, so we don't check its
+    # return value
+    _WaitForSync(cfg, instance, unlock=True)
+
+    # so check manually all the devices
+    for name in iv_names:
+      dev, child, new_drbd = iv_names[name]
+      cfg.SetDiskID(dev, instance.primary_node)
+      is_degr = rpc.call_blockdev_find(instance.primary_node, dev)[5]
+      if is_degr:
+        raise errors.OpExecError, ("MD device %s is degraded!" % name)
+      cfg.SetDiskID(new_drbd, instance.primary_node)
+      is_degr = rpc.call_blockdev_find(instance.primary_node, new_drbd)[5]
+      if is_degr:
+        raise errors.OpExecError, ("New drbd device %s is degraded!" % name)
+
+    for name in iv_names:
+      dev, child, new_drbd = iv_names[name]
+      logger.Info("remove mirror %s component" % name)
+      cfg.SetDiskID(dev, instance.primary_node)
+      if not rpc.call_blockdev_removechild(instance.primary_node, dev, child):
+        logger.Error("Can't remove child from mirror, aborting"
+                     " *this device cleanup*.\nYou need to cleanup manually!!")
+        continue
+
+      for node in child.logical_id[:2]:
+        logger.Info("remove child device on %s" % node)
+        cfg.SetDiskID(child, node)
+        if not rpc.call_blockdev_remove(node, child):
+          logger.Error("Warning: failed to remove device from node %s,"
+                       " continuing operation." % node)
+
+      dev.children.remove(child)
+
+      cfg.AddInstance(instance)
+
+
+class LUQueryInstanceData(NoHooksLU):
+  """Query runtime instance data.
+
+  """
+  _OP_REQP = ["instances"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This only checks the optional instance list against the existing names.
+
+    """
+    if not isinstance(self.op.instances, list):
+      raise errors.OpPrereqError, "Invalid argument type 'instances'"
+    if self.op.instances:
+      self.wanted_instances = []
+      names = self.op.instances
+      for name in names:
+        instance = self.cfg.GetInstanceInfo(self.cfg.ExpandInstanceName(name))
+        if instance is None:
+          raise errors.OpPrereqError, ("No such instance name '%s'" % name)
+        self.wanted_instances.append(instance)
+    else:
+      self.wanted_instances = [self.cfg.GetInstanceInfo(name) for name
+                               in self.cfg.GetInstanceList()]
+    return
+
+  def _ComputeDiskStatus(self, instance, snode, dev):
+    """Compute block device status.
+
+    """
+    self.cfg.SetDiskID(dev, instance.primary_node)
+    dev_pstatus = rpc.call_blockdev_find(instance.primary_node, dev)
+    if dev.dev_type == "drbd":
+      # we change the snode then (otherwise we use the one passed in)
+      if dev.logical_id[0] == instance.primary_node:
+        snode = dev.logical_id[1]
+      else:
+        snode = dev.logical_id[0]
+
+    if snode:
+      self.cfg.SetDiskID(dev, snode)
+      dev_sstatus = rpc.call_blockdev_find(snode, dev)
+    else:
+      dev_sstatus = None
+
+    if dev.children:
+      dev_children = [self._ComputeDiskStatus(instance, snode, child)
+                      for child in dev.children]
+    else:
+      dev_children = []
+
+    data = {
+      "iv_name": dev.iv_name,
+      "dev_type": dev.dev_type,
+      "logical_id": dev.logical_id,
+      "physical_id": dev.physical_id,
+      "pstatus": dev_pstatus,
+      "sstatus": dev_sstatus,
+      "children": dev_children,
+      }
+
+    return data
+
+  def Exec(self, feedback_fn):
+    """Gather and return instance data.
+
+    """
+    result = {}
+    for instance in self.wanted_instances:
+      remote_info = rpc.call_instance_info(instance.primary_node,
+                                           instance.name)
+      if remote_info and "state" in remote_info:
+        remote_state = "up"
+      else:
+        remote_state = "down"
+      if instance.status == "down":
+        config_state = "down"
+      else:
+        config_state = "up"
+
+      disks = [self._ComputeDiskStatus(instance, None, device)
+               for device in instance.disks]
+
+      idict = {
+        "name": instance.name,
+        "config_state": config_state,
+        "run_state": remote_state,
+        "pnode": instance.primary_node,
+        "snodes": instance.secondary_nodes,
+        "os": instance.os,
+        "memory": instance.memory,
+        "nics": [(nic.mac, nic.ip, nic.bridge) for nic in instance.nics],
+        "disks": disks,
+        }
+
+      result[instance.name] = idict
+
+    return result
+
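+# Illustrative sketch (hypothetical values) of one entry of the dictionary
+# returned by LUQueryInstanceData.Exec() above:
+#   result["instance1.example.com"] = {
+#     "name": "instance1.example.com", "config_state": "up",
+#     "run_state": "up", "pnode": "node1.example.com",
+#     "snodes": ["node2.example.com"], "os": "debian-etch", "memory": 512,
+#     "nics": [("aa:00:00:11:22:33", "198.51.100.10", "xen-br0")],
+#     "disks": [...],  # one tree per disk, as built by _ComputeDiskStatus
+#   }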
+
+class LUQueryNodeData(NoHooksLU):
+  """Logical unit for querying node data.
+
+  """
+  _OP_REQP = ["nodes"]
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This only checks the optional node list against the existing names.
+
+    """
+    if not isinstance(self.op.nodes, list):
+      raise errors.OpPrereqError, "Invalid argument type 'nodes'"
+    if self.op.nodes:
+      self.wanted_nodes = []
+      names = self.op.nodes
+      for name in names:
+        node = self.cfg.GetNodeInfo(self.cfg.ExpandNodeName(name))
+        if node is None:
+          raise errors.OpPrereqError, ("No such node name '%s'" % name)
+        self.wanted_nodes.append(node)
+    else:
+      self.wanted_nodes = [self.cfg.GetNodeInfo(name) for name
+                           in self.cfg.GetNodeList()]
+    return
+
+  def Exec(self, feedback_fn):
+    """Compute and return the list of nodes.
+
+    """
+    ilist = [self.cfg.GetInstanceInfo(iname) for iname
+             in self.cfg.GetInstanceList()]
+    result = []
+    for node in self.wanted_nodes:
+      result.append((node.name, node.primary_ip, node.secondary_ip,
+                     [inst.name for inst in ilist
+                      if inst.primary_node == node.name],
+                     [inst.name for inst in ilist
+                      if node.name in inst.secondary_nodes],
+                     ))
+    return result
+
+
+class LUSetInstanceParms(LogicalUnit):
+  """Modifies an instances's parameters.
+
+  """
+  HPATH = "instance-modify"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This runs on the master, primary and secondaries.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      }
+    if self.mem:
+      env["MEM_SIZE"] = self.mem
+    if self.vcpus:
+      env["VCPUS"] = self.vcpus
+    if self.do_ip:
+      env["INSTANCE_IP"] = self.ip
+    if self.bridge:
+      env["BRIDGE"] = self.bridge
+
+    nl = [self.cfg.GetMaster(),
+          self.instance.primary_node] + list(self.instance.secondary_nodes)
+
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This only checks the instance list against the existing names.
+
+    """
+    self.mem = getattr(self.op, "mem", None)
+    self.vcpus = getattr(self.op, "vcpus", None)
+    self.ip = getattr(self.op, "ip", None)
+    self.bridge = getattr(self.op, "bridge", None)
+    if [self.mem, self.vcpus, self.ip, self.bridge].count(None) == 4:
+      raise errors.OpPrereqError, ("No changes submitted")
+    if self.mem is not None:
+      try:
+        self.mem = int(self.mem)
+      except ValueError, err:
+        raise errors.OpPrereqError, ("Invalid memory size: %s" % str(err))
+    if self.vcpus is not None:
+      try:
+        self.vcpus = int(self.vcpus)
+      except ValueError, err:
+        raise errors.OpPrereqError, ("Invalid vcpus number: %s" % str(err))
+    if self.ip is not None:
+      self.do_ip = True
+      if self.ip.lower() == "none":
+        self.ip = None
+      else:
+        if not utils.IsValidIP(self.ip):
+          raise errors.OpPrereqError, ("Invalid IP address '%s'." % self.ip)
+    else:
+      self.do_ip = False
+
+    instance = self.cfg.GetInstanceInfo(
+      self.cfg.ExpandInstanceName(self.op.instance_name))
+    if instance is None:
+      raise errors.OpPrereqError, ("No such instance name '%s'" %
+                                   self.op.instance_name)
+    self.op.instance_name = instance.name
+    self.instance = instance
+    return
+
+  def Exec(self, feedback_fn):
+    """Modifies an instance.
+
+    All parameters take effect only at the next restart of the instance.
+
+    """
+    result = []
+    instance = self.instance
+    if self.mem:
+      instance.memory = self.mem
+      result.append(("mem", self.mem))
+    if self.vcpus:
+      instance.vcpus = self.vcpus
+      result.append(("vcpus",  self.vcpus))
+    if self.do_ip:
+      instance.nics[0].ip = self.ip
+      result.append(("ip", self.ip))
+    if self.bridge:
+      instance.nics[0].bridge = self.bridge
+      result.append(("bridge", self.bridge))
+
+    self.cfg.AddInstance(instance)
+
+    return result
+
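+# Illustrative example (hypothetical values): changing only the memory and
+# the bridge of an instance makes Exec() above return the list of applied
+# changes, e.g. [("mem", 512), ("bridge", "xen-br0")]; the new values take
+# effect only at the next restart of the instance.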
+
+class LUQueryExports(NoHooksLU):
+  """Query the exports list
+
+  """
+  _OP_REQP = []
+
+  def CheckPrereq(self):
+    """Check that the nodelist contains only existing nodes.
+
+    """
+    nodes = getattr(self.op, "nodes", None)
+    if not nodes:
+      self.op.nodes = self.cfg.GetNodeList()
+    else:
+      expnodes = [self.cfg.ExpandNodeName(node) for node in nodes]
+      if expnodes.count(None) > 0:
+        raise errors.OpPrereqError, ("At least one of the given nodes %s"
+                                     " is unknown" % self.op.nodes)
+      self.op.nodes = expnodes
+
+  def Exec(self, feedback_fn):
+    """Compute the list of all the exported system images.
+
+    Returns:
+      a dictionary with the structure node->(export-list)
+      where export-list is a list of the instances exported on
+      that node.
+
+    """
+    return rpc.call_export_list(self.op.nodes)
+
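+# Illustrative example (hypothetical names): the returned structure maps
+# each queried node to the list of instances exported on it, e.g.:
+#   {"node1.example.com": ["instance1.example.com"],
+#    "node2.example.com": []}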
+
+class LUExportInstance(LogicalUnit):
+  """Export an instance to an image in the cluster.
+
+  """
+  HPATH = "instance-export"
+  HTYPE = constants.HTYPE_INSTANCE
+  _OP_REQP = ["instance_name", "target_node", "shutdown"]
+
+  def BuildHooksEnv(self):
+    """Build hooks env.
+
+    This will run on the master, primary node and target node.
+
+    """
+    env = {
+      "INSTANCE_NAME": self.op.instance_name,
+      "EXPORT_NODE": self.op.target_node,
+      "EXPORT_DO_SHUTDOWN": self.op.shutdown,
+      }
+    nl = [self.cfg.GetMaster(), self.instance.primary_node,
+          self.op.target_node]
+    return env, nl, nl
+
+  def CheckPrereq(self):
+    """Check prerequisites.
+
+    This checks that the instance name is a valid one.
+
+    """
+    instance_name = self.cfg.ExpandInstanceName(self.op.instance_name)
+    self.instance = self.cfg.GetInstanceInfo(instance_name)
+    if self.instance is None:
+      raise errors.OpPrereqError, ("Instance '%s' not found" %
+                                   self.op.instance_name)
+
+    # node verification
+    dst_node_short = self.cfg.ExpandNodeName(self.op.target_node)
+    self.dst_node = self.cfg.GetNodeInfo(dst_node_short)
+
+    if self.dst_node is None:
+      raise errors.OpPrereqError, ("Destination node '%s' is uknown." %
+                                   self.op.target_node)
+    self.op.target_node = self.dst_node.name
+
+  def Exec(self, feedback_fn):
+    """Export an instance to an image in the cluster.
+
+    """
+    instance = self.instance
+    dst_node = self.dst_node
+    src_node = instance.primary_node
+    # shutdown the instance, unless requested not to do so
+    if self.op.shutdown:
+      op = opcodes.OpShutdownInstance(instance_name=instance.name)
+      self.processor.ChainOpCode(op, feedback_fn)
+
+    vgname = self.cfg.GetVGName()
+
+    snap_disks = []
+
+    try:
+      for disk in instance.disks:
+        if disk.iv_name == "sda":
+          # new_dev_name will be the name of a snapshot of an lvm leaf
+          # of the disk we passed in
+          new_dev_name = rpc.call_blockdev_snapshot(src_node, disk)
+
+          if not new_dev_name:
+            logger.Error("could not snapshot block device %s on node %s" %
+                         (disk.logical_id[1], src_node))
+          else:
+            new_dev = objects.Disk(dev_type="lvm", size=disk.size,
+                                   logical_id=(vgname, new_dev_name),
+                                   physical_id=(vgname, new_dev_name),
+                                   iv_name=disk.iv_name)
+            snap_disks.append(new_dev)
+
+    finally:
+      if self.op.shutdown:
+        op = opcodes.OpStartupInstance(instance_name=instance.name,
+                                       force=False)
+        self.processor.ChainOpCode(op, feedback_fn)
+
+    # TODO: check for size
+
+    for dev in snap_disks:
+      if not rpc.call_snapshot_export(src_node, dev, dst_node.name, instance):
+        logger.Error("could not export block device %s from node"
+                     " %s to node %s" %
+                     (dev.logical_id[1], src_node, dst_node.name))
+      if not rpc.call_blockdev_remove(src_node, dev):
+        logger.Error("could not remove snapshot block device %s from"
+                     " node %s" % (dev.logical_id[1], src_node))
+
+    if not rpc.call_finalize_export(dst_node.name, instance, snap_disks):
+      logger.Error("could not finalize export for instance %s on node %s" %
+                   (instance.name, dst_node.name))
+
+    nodelist = self.cfg.GetNodeList()
+    nodelist.remove(dst_node.name)
+
+    # on one-node clusters nodelist will be empty after the removal
+    # if we proceed the backup would be removed because OpQueryExports
+    # substitutes an empty list with the full cluster node list.
+    if nodelist:
+      op = opcodes.OpQueryExports(nodes=nodelist)
+      exportlist = self.processor.ChainOpCode(op, feedback_fn)
+      for node in exportlist:
+        if instance.name in exportlist[node]:
+          if not rpc.call_export_remove(node, instance.name):
+            logger.Error("could not remove older export for instance %s"
+                         " on node %s" % (instance.name, node))
diff --git a/lib/config.py b/lib/config.py
new file mode 100644
index 0000000000000000000000000000000000000000..51af348f14ae9b40c2ba3d5882c25c4d9a1bc253
--- /dev/null
+++ b/lib/config.py
@@ -0,0 +1,540 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Configuration management for Ganeti
+
+This module provides the interface to the ganeti cluster configuration.
+
+
+The configuration data is stored on every node but is updated on the
+master only. After each update, the master distributes the data to the
+other nodes.
+
+Currently the data storage format is pickle; yaml was initially not
+available, and when we later used it, it proved to be a slow,
+memory-hungry beast, so we reverted to pickle using custom Unpicklers.
+
+"""
+
+import os
+import socket
+import tempfile
+import random
+
+from ganeti import errors
+from ganeti import logger
+from ganeti import utils
+from ganeti import constants
+from ganeti import rpc
+from ganeti import objects
+
+
+class ConfigWriter:
+  """The interface to the cluster configuration"""
+
+  def __init__(self, cfg_file=None, offline=False):
+    self._config_data = None
+    self._config_time = None
+    self._offline = offline
+    if cfg_file is None:
+      self._cfg_file = constants.CLUSTER_CONF_FILE
+    else:
+      self._cfg_file = cfg_file
+
+  # this method needs to be static, so that we can call it on the class
+  @staticmethod
+  def IsCluster():
+    """Check if the cluster is configured.
+
+    """
+    return os.path.exists(constants.CLUSTER_CONF_FILE)
+
+  def GenerateMAC(self):
+    """Generate a MAC for an instance.
+
+    This checks the current instances for duplicates.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    prefix = self._config_data.cluster.mac_prefix
+    all_macs = self._AllMACs()
+    retries = 64
+    while retries > 0:
+      byte1 = random.randrange(0, 256)
+      byte2 = random.randrange(0, 256)
+      byte3 = random.randrange(0, 256)
+      mac = "%s:%02x:%02x:%02x" % (prefix, byte1, byte2, byte3)
+      if mac not in all_macs:
+        break
+      retries -= 1
+    else:
+      raise errors.ConfigurationError, ("Can't generate unique MAC")
+    return mac
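+
+  # Illustrative example (hypothetical prefix): with a cluster mac_prefix of
+  # "aa:00:00", GenerateMAC() above returns strings such as
+  # "aa:00:00:3f:a2:1b", re-rolling the three random bytes until the result
+  # is not among the MACs returned by _AllMACs().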
+
+  def _AllMACs(self):
+    """Return all MACs present in the config.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+
+    result = []
+    for instance in self._config_data.instances.values():
+      for nic in instance.nics:
+        result.append(nic.mac)
+
+    return result
+
+  def VerifyConfig(self):
+    """Stub verify function.
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+
+    result = []
+    seen_macs = []
+    data = self._config_data
+    for instance_name in data.instances:
+      instance = data.instances[instance_name]
+      if instance.primary_node not in data.nodes:
+        result.append("Instance '%s' has invalid primary node '%s'" %
+                      (instance_name, instance.primary_node))
+      for snode in instance.secondary_nodes:
+        if snode not in data.nodes:
+          result.append("Instance '%s' has invalid secondary node '%s'" %
+                        (instance_name, snode))
+      for idx, nic in enumerate(instance.nics):
+        if nic.mac in seen_macs:
+          result.append("Instance '%s' has NIC %d mac %s duplicate" %
+                        (instance_name, idx, nic.mac))
+        else:
+          seen_macs.append(nic.mac)
+    return result
+
+  def SetDiskID(self, disk, node_name):
+    """Convert the unique ID to the ID needed on the target nodes.
+
+    This is used only for drbd, which needs ip/port configuration.
+
+    The routine descends down and updates its children also, because
+    this helps when only the top device is passed to the remote node.
+
+    """
+    if disk.children:
+      for child in disk.children:
+        self.SetDiskID(child, node_name)
+
+    if disk.logical_id is None and disk.physical_id is not None:
+      return
+    if disk.dev_type == "drbd":
+      pnode, snode, port = disk.logical_id
+      if node_name not in (pnode, snode):
+        raise errors.ConfigurationError, ("DRBD device not knowing node %s" %
+                                          node_name)
+      pnode_info = self.GetNodeInfo(pnode)
+      snode_info = self.GetNodeInfo(snode)
+      if pnode_info is None or snode_info is None:
+        raise errors.ConfigurationError("Can't find primary or secondary node"
+                                        " for %s" % str(disk))
+      if pnode == node_name:
+        disk.physical_id = (pnode_info.secondary_ip, port,
+                            snode_info.secondary_ip, port)
+      else: # it must be secondary, we tested above
+        disk.physical_id = (snode_info.secondary_ip, port,
+                            pnode_info.secondary_ip, port)
+    else:
+      disk.physical_id = disk.logical_id
+    return
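+
+  # Illustrative example (hypothetical values): for a drbd disk with
+  # logical_id=("node1", "node2", 11000), SetDiskID(disk, "node1") sets
+  # disk.physical_id to
+  #   (<node1 secondary_ip>, 11000, <node2 secondary_ip>, 11000)
+  # i.e. the local endpoint first, then the peer's.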
+
+  def AllocatePort(self):
+    """Allocate a port.
+
+    The port will be recorded in the cluster config.
+
+    """
+    self._OpenConfig()
+
+    self._config_data.cluster.highest_used_port += 1
+    if self._config_data.cluster.highest_used_port >= constants.LAST_DRBD_PORT:
+      raise errors.ConfigurationError, ("The highest used port is greater"
+                                        " than %s. Aborting." %
+                                        constants.LAST_DRBD_PORT)
+    port = self._config_data.cluster.highest_used_port
+
+    self._WriteConfig()
+    return port
+
+  def GetHostKey(self):
+    """Return the rsa hostkey from the config.
+
+    Args: None
+
+    Returns: rsa hostkey
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    return self._config_data.cluster.rsahostkeypub
+
+  def AddInstance(self, instance):
+    """Add an instance to the config.
+
+    This should be used after creating a new instance.
+
+    Args:
+      instance: the instance object
+    """
+    if not isinstance(instance, objects.Instance):
+      raise errors.ProgrammerError("Invalid type passed to AddInstance")
+
+    self._OpenConfig()
+    self._config_data.instances[instance.name] = instance
+    self._WriteConfig()
+
+  def MarkInstanceUp(self, instance_name):
+    """Mark the instance status to up in the config.
+
+    """
+    self._OpenConfig()
+
+    if instance_name not in self._config_data.instances:
+      raise errors.ConfigurationError, ("Unknown instance '%s'" %
+                                        instance_name)
+    instance = self._config_data.instances[instance_name]
+    instance.status = "up"
+    self._WriteConfig()
+
+  def RemoveInstance(self, instance_name):
+    """Remove the instance from the configuration.
+
+    """
+    self._OpenConfig()
+
+    if instance_name not in self._config_data.instances:
+      raise errors.ConfigurationError, ("Unknown instance '%s'" %
+                                        instance_name)
+    del self._config_data.instances[instance_name]
+    self._WriteConfig()
+
+  def MarkInstanceDown(self, instance_name):
+    """Mark the status of an instance as down in the configuration.
+
+    """
+    self._OpenConfig()
+
+    if instance_name not in self._config_data.instances:
+      raise errors.ConfigurationError, ("Unknown instance '%s'" %
+                                        instance_name)
+    instance = self._config_data.instances[instance_name]
+    instance.status = "down"
+    self._WriteConfig()
+
+  def GetInstanceList(self):
+    """Get the list of instances.
+
+    Returns:
+      a list of instance names, e.g.
+      ['instance2.example.com', 'instance1.example.com']; this contains
+      all instances, including the ones in Admin_down state
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+
+    return self._config_data.instances.keys()
+
+  def ExpandInstanceName(self, short_name):
+    """Attempt to expand an incomplete instance name.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+
+    return utils.MatchNameComponent(short_name,
+                                    self._config_data.instances.keys())
+
+  def GetInstanceInfo(self, instance_name):
+    """Returns informations about an instance.
+
+    It takes the information from the configuration file. Other informations of
+    an instance are taken from the live systems.
+
+    Args:
+      instance: name of the instance, ex instance1.example.com
+
+    Returns:
+      the instance object
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+
+    if instance_name not in self._config_data.instances:
+      return None
+
+    return self._config_data.instances[instance_name]
+
+  def AddNode(self, node):
+    """Add a node to the configuration.
+
+    Args:
+      node: an objects.Node instance
+
+    """
+    self._OpenConfig()
+    self._config_data.nodes[node.name] = node
+    self._WriteConfig()
+
+  def RemoveNode(self, node_name):
+    """Remove a node from the configuration.
+
+    """
+    self._OpenConfig()
+    if node_name not in self._config_data.nodes:
+      raise errors.ConfigurationError, ("Unknown node '%s'" % node_name)
+
+    del self._config_data.nodes[node_name]
+    self._WriteConfig()
+
+  def ExpandNodeName(self, short_name):
+    """Attempt to expand an incomplete instance name.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+
+    return utils.MatchNameComponent(short_name,
+                                    self._config_data.nodes.keys())
+
+  def GetNodeInfo(self, node_name):
+    """Get the configuration of a node, as stored in the config.
+
+    Args: node_name: the name of the node
+
+    Returns: the node object
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+
+    if node_name not in self._config_data.nodes:
+      return None
+
+    return self._config_data.nodes[node_name]
+
+  def GetNodeList(self):
+    """Return the list of nodes which are in the configuration.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    return self._config_data.nodes.keys()
+
+  def DumpConfig(self):
+    """Return the entire configuration of the cluster.
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    return self._config_data
+
+  def _BumpSerialNo(self):
+    """Bump up the serial number of the config.
+
+    """
+    self._config_data.cluster.serial_no += 1
+
+  def _OpenConfig(self):
+    """Read the config data from disk.
+
+    In case we already have configuration data and the config file has
+    the same mtime as when we read it, we skip the parsing of the
+    file, since de-serialisation could be slow.
+
+    """
+    try:
+      st = os.stat(self._cfg_file)
+    except OSError, err:
+      raise errors.ConfigurationError, "Can't stat config file: %s" % err
+    if (self._config_data is not None and
+        self._config_time is not None and
+        self._config_time == st.st_mtime):
+      # data is current, so skip loading of config file
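+      # (this cheap mtime test assumes writers replace the file
+      # wholesale, as _WriteConfig does via rename, so an unchanged
+      # mtime reliably means the in-memory copy is still current)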
+      return
+    f = open(self._cfg_file, 'r')
+    try:
+      try:
+        data = objects.ConfigObject.Load(f)
+      except Exception, err:
+        raise errors.ConfigurationError(err)
+    finally:
+      f.close()
+    if (not hasattr(data, 'cluster') or
+        not hasattr(data.cluster, 'config_version')):
+      raise errors.ConfigurationError, ("Incomplete configuration"
+                                        " (missing cluster.config_version)")
+    if data.cluster.config_version != constants.CONFIG_VERSION:
+      raise errors.ConfigurationError, ("Cluster configuration version"
+                                        " mismatch, got %s instead of %s" %
+                                        (data.cluster.config_version,
+                                         constants.CONFIG_VERSION))
+    self._config_data = data
+    self._config_time = st.st_mtime
+
+  def _ReleaseLock(self):
+    """xxxx
+    """
+
+  def _DistributeConfig(self):
+    """Distribute the configuration to the other nodes.
+
+    Currently, this only copies the configuration file. In the future,
+    it could be used to encapsulate the 2/3-phase update mechanism.
+
+    """
+    if self._offline:
+      return True
+    bad = False
+    nodelist = self.GetNodeList()
+    myhostname = socket.gethostname()
+
+    tgt_list = []
+    for node in nodelist:
+      nodeinfo = self.GetNodeInfo(node)
+      if nodeinfo.name == myhostname:
+        continue
+      tgt_list.append(node)
+
+    result = rpc.call_upload_file(tgt_list, self._cfg_file)
+    for node in tgt_list:
+      if not result[node]:
+        logger.Error("copy of file %s to node %s failed" %
+                     (self._cfg_file, node))
+        bad = True
+    return not bad
+
+  def _WriteConfig(self, destination=None):
+    """Write the configuration data to persistent storage.
+
+    """
+    if destination is None:
+      destination = self._cfg_file
+    self._BumpSerialNo()
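+    # atomic-replace idiom: dump to a temporary file in the same
+    # directory, fsync it, then rename over the destination, so readers
+    # never observe a partially written config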
+    dir_name, file_name = os.path.split(destination)
+    fd, name = tempfile.mkstemp('.newconfig', file_name, dir_name)
+    f = os.fdopen(fd, 'w')
+    try:
+      self._config_data.Dump(f)
+      os.fsync(f.fileno())
+    finally:
+      f.close()
+    # we don't need to do os.close(fd) as f.close() did it
+    os.rename(name, destination)
+    self._DistributeConfig()
+
+  def InitConfig(self, node, primary_ip, secondary_ip,
+                 clustername, hostkeypub, mac_prefix, vg_name, def_bridge):
+    """Create the initial cluster configuration.
+
+    It will contain the current node, which will also be the master
+    node, and no instances or operating systems.
+
+    Args:
+      node: the nodename of the initial node
+      primary_ip: the IP address of the current host
+      secondary_ip: the secondary IP of the current host or None
+      clustername: the name of the cluster
+      hostkeypub: the public hostkey of this host
+      mac_prefix: the prefix for instance MAC addresses
+      vg_name: the LVM volume group name
+      def_bridge: the default bridge for instance NICs
+
+    """
+    hu_port = constants.FIRST_DRBD_PORT - 1
+    globalconfig = objects.Cluster(config_version=constants.CONFIG_VERSION,
+                                   serial_no=1, master_node=node,
+                                   name=clustername,
+                                   rsahostkeypub=hostkeypub,
+                                   highest_used_port=hu_port,
+                                   mac_prefix=mac_prefix,
+                                   volume_group_name=vg_name,
+                                   default_bridge=def_bridge)
+    if secondary_ip is None:
+      secondary_ip = primary_ip
+    nodeconfig = objects.Node(name=node, primary_ip=primary_ip,
+                              secondary_ip=secondary_ip)
+
+    self._config_data = objects.ConfigData(nodes={node: nodeconfig},
+                                           instances={},
+                                           cluster=globalconfig)
+    self._WriteConfig()
+
+  def GetClusterName(self):
+    """Return the cluster name.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    return self._config_data.cluster.name
+
+  def GetVGName(self):
+    """Return the volume group name.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    return self._config_data.cluster.volume_group_name
+
+  def GetDefBridge(self):
+    """Return the default bridge.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    return self._config_data.cluster.default_bridge
+
+  def GetMACPrefix(self):
+    """Return the mac prefix.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    return self._config_data.cluster.mac_prefix
+
+  def GetMaster(self):
+    """Get the name of the master.
+
+    """
+    self._OpenConfig()
+    self._ReleaseLock()
+    return self._config_data.cluster.master_node
+
+  def SetMaster(self, master_node):
+    """Change the master of the cluster.
+
+    As with all changes, the configuration data will be distributed to
+    all nodes.
+
+    This function is used for manual master failover.
+
+    """
+    self._OpenConfig()
+    self._config_data.cluster.master_node = master_node
+    self._WriteConfig()
+    self._ReleaseLock()
diff --git a/lib/constants.py b/lib/constants.py
new file mode 100644
index 0000000000000000000000000000000000000000..5eff9a6823b2153afe7b154b0603639e72ba1598
--- /dev/null
+++ b/lib/constants.py
@@ -0,0 +1,113 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Module holding different constants."""
+
+# various versions
+CONFIG_VERSION = 2
+PROTOCOL_VERSION = 2
+RELEASE_VERSION = "1.2a1"
+OS_API_VERSION = 4
+EXPORT_VERSION = 0
+
+
+# file paths
+DATA_DIR = "/var/lib/ganeti"
+CLUSTER_CONF_FILE = DATA_DIR + "/config.data"
+CLUSTER_NAME_FILE = DATA_DIR + "/cluster-name"
+SSL_CERT_FILE = DATA_DIR + "/server.pem"
+HYPERCONF_FILE = DATA_DIR + "/hypervisor"
+WATCHER_STATEFILE = DATA_DIR + "/restart_state"
+
+ETC_DIR = "/etc/ganeti"
+
+MASTER_CRON_FILE = ETC_DIR + "/master-cron"
+MASTER_CRON_LINK = "/etc/cron.d/ganeti-master-cron"
+NODE_INITD_SCRIPT = "/etc/init.d/ganeti"
+NODE_INITD_NAME = "ganeti"
+DEFAULT_NODED_PORT = 1811
+FIRST_DRBD_PORT = 11000
+LAST_DRBD_PORT = 14999
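+# the range 11000-14999 reserves a block of 4000 ports for DRBD;
+# AllocatePort in lib/config.py hands them out sequentially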
+MASTER_INITD_SCRIPT = "/etc/init.d/ganeti-master"
+MASTER_INITD_NAME = "ganeti-master"
+
+LOG_DIR = "/var/log/ganeti"
+LOG_OS_DIR = LOG_DIR + "/os"
+LOG_NODESERVER = LOG_DIR + "/node-daemon.log"
+
+OS_DIR = "/srv/ganeti/os"
+EXPORT_DIR = "/srv/ganeti/export"
+
+EXPORT_CONF_FILE = "config.ini"
+
+# hooks-related constants
+HOOKS_BASE_DIR = "/etc/ganeti/hooks"
+HOOKS_PHASE_PRE = "pre"
+HOOKS_PHASE_POST = "post"
+HOOKS_VERSION = 1
+
+# hooks subject type (what object type does the LU deal with)
+HTYPE_CLUSTER = "CLUSTER"
+HTYPE_NODE = "NODE"
+HTYPE_INSTANCE = "INSTANCE"
+
+HKR_SKIP = 0
+HKR_FAIL = 1
+HKR_SUCCESS = 2
+
+# disk template types
+DT_DISKLESS = "diskless"
+DT_PLAIN = "plain"
+DT_LOCAL_RAID1 = "local_raid1"
+DT_REMOTE_RAID1 = "remote_raid1"
+
+# instance creation modes
+INSTANCE_CREATE = "create"
+INSTANCE_IMPORT = "import"
+
+DISK_TEMPLATES = frozenset([DT_DISKLESS, DT_PLAIN,
+                            DT_LOCAL_RAID1, DT_REMOTE_RAID1])
+
+# file groups
+CLUSTER_CONF_FILES = ["/etc/hosts",
+                      "/etc/ssh/ssh_known_hosts",
+                      "/etc/ssh/ssh_host_dsa_key",
+                      "/etc/ssh/ssh_host_dsa_key.pub",
+                      "/etc/ssh/ssh_host_rsa_key",
+                      "/etc/ssh/ssh_host_rsa_key.pub",
+                      "/root/.ssh/authorized_keys",
+                      "/root/.ssh/id_dsa",
+                      "/root/.ssh/id_dsa.pub",
+                      CLUSTER_CONF_FILE,
+                      SSL_CERT_FILE,
+                      MASTER_CRON_FILE,
+                      ]
+
+MASTER_CONFIGFILES = [MASTER_CRON_LINK,
+                      "/etc/rc2.d/S21%s" % MASTER_INITD_NAME]
+
+NODE_CONFIGFILES = [NODE_INITD_SCRIPT,
+                    "/etc/rc2.d/S20%s" % NODE_INITD_NAME,
+                    "/etc/rc0.d/K80%s" % NODE_INITD_NAME]
+
+# import/export config options
+INISECT_EXP = "export"
+INISECT_INS = "instance"
diff --git a/lib/errors.py b/lib/errors.py
new file mode 100644
index 0000000000000000000000000000000000000000..7bcd564415c04625ac7ae5523bb59f876c2b3c41
--- /dev/null
+++ b/lib/errors.py
@@ -0,0 +1,170 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Ganeti exception handling"""
+
+
+class GenericError(Exception):
+  """Base exception for Ganeti.
+
+  """
+  pass
+
+
+class LVMError(GenericError):
+  """LVM-related exception.
+
+  This exception codifies problems with LVM setup.
+
+  """
+  pass
+
+
+class LockError(GenericError):
+  """Lock error exception.
+
+  This signifies problems in the locking subsystem.
+
+  """
+  pass
+
+
+class HypervisorError(GenericError):
+  """Hypervisor-related exception.
+
+  This is raised in case we can't communicate with the hypervisor
+  properly.
+
+  """
+  pass
+
+
+class ProgrammerError(GenericError):
+  """Programming-related error.
+
+  This is raised in cases where we determine that the calling conventions
+  have been violated, meaning we got some desynchronisation between
+  parts of our code. It signifies a real programming bug.
+
+  """
+  pass
+
+
+class BlockDeviceError(GenericError):
+  """Block-device related exception.
+
+  This is raised in case we can't setup the instance's block devices
+  properly.
+
+  """
+  pass
+
+
+class ConfigurationError(GenericError):
+  """Configuration related exception.
+
+  Things like having an instance with a primary node that doesn't
+  exist in the config raise this exception.
+
+  """
+  pass
+
+
+class RemoteError(GenericError):
+  """Programming-related error on remote call.
+
+  This is raised when an unhandled error occurs in a call to a
+  remote node.  It usually signifies a real programming bug.
+
+  """
+  pass
+
+
+class InvalidOS(GenericError):
+  """Missing OS on node.
+
+  This is raised when an OS exists on the master (or is otherwise
+  requested by the code) but not on the target node.
+
+  This exception has two arguments:
+    - the name of the os
+    - the reason why we consider this an invalid OS (text of error message)
+
+  """
+
+
+class ParameterError(GenericError):
+  """A passed parameter to a command is invalid.
+
+  This is raised when the parameter passed to a request function is
+  invalid. Correct code should have verified this before passing the
+  request structure.
+
+  The argument to this exception should be the parameter name.
+
+  """
+  pass
+
+
+class OpPrereqError(GenericError):
+  """Prerequisites for the OpCode are not fulfilled.
+
+  """
+
+class OpExecError(GenericError):
+  """Error during OpCode execution.
+
+  """
+
+class OpCodeUnknown(GenericError):
+  """Unknown opcode submitted.
+
+  This signifies a mismatch between the definitions on the client and
+  server side.
+
+  """
+
+class HooksFailure(GenericError):
+  """A generic hook failure.
+
+  This signifies usually a setup misconfiguration.
+
+  """
+
+class HooksAbort(HooksFailure):
+  """A required hook has failed.
+
+  This causes an abort of the operation in the initial phase. This
+  exception always has an attribute args which is a list of tuples of:
+    - node: the source node on which this hook has failed
+    - script: the name of the script which aborted the run
+
+  """
+
+class UnitParseError(GenericError):
+  """Unable to parse size unit.
+
+  """
+
+
+class SshKeyError(GenericError):
+  """Invalid SSH key.
+  """
diff --git a/lib/hypervisor.py b/lib/hypervisor.py
new file mode 100644
index 0000000000000000000000000000000000000000..39a628ba96caa06daa4b5e4473003342e57dbc61
--- /dev/null
+++ b/lib/hypervisor.py
@@ -0,0 +1,496 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Module that abstracts the virtualisation interface
+
+"""
+
+import time
+import os
+from cStringIO import StringIO
+
+from ganeti import utils
+from ganeti import logger
+from ganeti import ssconf
+from ganeti.errors import HypervisorError
+
+_HT_XEN30 = "xen-3.0"
+_HT_FAKE = "fake"
+
+VALID_HTYPES = (_HT_XEN30, _HT_FAKE)
+
+def GetHypervisor():
+  """Return a Hypervisor instance.
+
+  This function parses the cluster hypervisor configuration file and
+  instantiates a class based on the value of this file.
+
+  """
+  ht_kind = ssconf.SimpleStore().GetHypervisorType()
+  if ht_kind == _HT_XEN30:
+    cls = XenHypervisor
+  elif ht_kind == _HT_FAKE:
+    cls = FakeHypervisor
+  else:
+    raise HypervisorError, "Unknown hypervisor type '%s'" % ht_kind
+  return cls()
+
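+# Usage sketch: callers go through GetHypervisor() rather than
+# instantiating a hypervisor class directly, e.g.:
+#   hyper = hypervisor.GetHypervisor()
+#   running = hyper.ListInstances()
+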
+
+class BaseHypervisor(object):
+  """Abstract virtualisation technology interface
+
+  The goal is that all aspects of the virtualisation technology are
+  abstracted away from the rest of the code.
+
+  """
+  def __init__(self):
+    pass
+
+  def StartInstance(self, instance, block_devices, extra_args):
+    """Start an instance."""
+    raise NotImplementedError
+
+  def StopInstance(self, instance, force=False):
+    """Stop an instance."""
+    raise NotImplementedError
+
+  def ListInstances(self):
+    """Get the list of running instances."""
+    raise NotImplementedError
+
+  def GetInstanceInfo(self, instance_name):
+    """Get instance properties.
+
+    Args:
+      instance_name: the instance name
+
+    Returns:
+      (name, id, memory, vcpus, state, times)
+
+    """
+    raise NotImplementedError
+
+  def GetAllInstancesInfo(self):
+    """Get properties of all instances.
+
+    Returns:
+      [(name, id, memory, vcpus, stat, times),...]
+    """
+    raise NotImplementedError
+
+  def GetNodeInfo(self):
+    """Return information about the node.
+
+    The return value is a dict, which has to have the following items:
+      (all values in MiB)
+      - memory_total: the total memory size on the node
+      - memory_free: the available memory on the node for instances
+      - memory_dom0: the memory used by the node itself, if available
+
+    """
+    raise NotImplementedError
+
+  @staticmethod
+  def GetShellCommandForConsole(instance_name):
+    """Return a command for connecting to the console of an instance.
+
+    """
+    raise NotImplementedError
+
+  def Verify(self):
+    """Verify the hypervisor.
+
+    """
+    raise NotImplementedError
+
+
+class XenHypervisor(BaseHypervisor):
+  """Xen hypervisor interface"""
+
+  @staticmethod
+  def _WriteConfigFile(instance, block_devices, extra_args):
+    """Create a Xen 3.0 config file.
+
+    """
+
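+    # the generated file looks roughly like this (hypothetical values):
+    #   kernel = '/boot/vmlinuz-2.6-xenU'
+    #   memory = 512
+    #   vcpus = 1
+    #   name = 'instance1.example.com'
+    #   vif = ['mac=aa:00:00:11:22:33, bridge=xen-br0']
+    #   disk = ['phy:/dev/xenvg/lv1,sda,w']
+    #   root = '/dev/sda ro'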
+    config = StringIO()
+    config.write("# this is autogenerated by Ganeti, please do not edit\n#\n")
+    config.write("kernel = '/boot/vmlinuz-2.6-xenU'\n")
+    config.write("memory = %d\n" % instance.memory)
+    config.write("vcpus = %d\n" % instance.vcpus)
+    config.write("name = '%s'\n" % instance.name)
+
+    vif_data = []
+    for nic in instance.nics:
+      nic_str = "mac=%s, bridge=%s" % (nic.mac, nic.bridge)
+      ip = getattr(nic, "ip", None)
+      if ip is not None:
+        nic_str += ", ip=%s" % ip
+      vif_data.append("'%s'" % nic_str)
+
+    config.write("vif = [%s]\n" % ",".join(vif_data))
+
+    disk_data = ["'phy:%s,%s,w'" % (rldev.dev_path, cfdev.iv_name)
+                 for cfdev, rldev in block_devices]
+    config.write("disk = [%s]\n" % ",".join(disk_data))
+
+    config.write("root = '/dev/sda ro'\n")
+    config.write("on_poweroff = 'destroy'\n")
+    config.write("on_reboot = 'restart'\n")
+    config.write("on_crash = 'restart'\n")
+    if extra_args:
+      config.write("extra = '%s'\n" % extra_args)
+    # just in case it exists
+    utils.RemoveFile("/etc/xen/auto/%s" % instance.name)
+    f = open("/etc/xen/%s" % instance.name, "w")
+    f.write(config.getvalue())
+    f.close()
+    return True
+
+  @staticmethod
+  def _RemoveConfigFile(instance):
+    """Remove the xen configuration file.
+
+    """
+    utils.RemoveFile("/etc/xen/%s" % instance.name)
+
+  @staticmethod
+  def _GetXMList(include_node):
+    """Return the list of running instances.
+
+    If the `include_node` argument is True, then we return information
+    for dom0 also, otherwise we filter that from the return value.
+
+    The return value is a list of (name, id, memory, vcpus, state, time spent)
+
+    """
+    for dummy in range(5):
+      result = utils.RunCmd(["xm", "list"])
+      if not result.failed:
+        break
+      logger.Error("xm list failed (%s): %s" % (result.fail_reason,
+                                                result.output))
+      time.sleep(1)
+
+    if result.failed:
+      raise HypervisorError("xm list failed, retries exceeded (%s): %s" %
+                            (result.fail_reason, result.stderr))
+
+    # skip over the heading and the domain 0 line (optional)
+    if include_node:
+      to_skip = 1
+    else:
+      to_skip = 2
+    lines = result.stdout.splitlines()[to_skip:]
+    result = []
+    for line in lines:
+      # The format of lines is:
+      # Name      ID Mem(MiB) VCPUs State  Time(s)
+      # Domain-0   0  3418     4 r-----    266.2
+      data = line.split()
+      if len(data) != 6:
+        raise HypervisorError("Can't parse output of xm list, line: %s" % line)
+      try:
+        data[1] = int(data[1])
+        data[2] = int(data[2])
+        data[3] = int(data[3])
+        data[5] = float(data[5])
+      except ValueError, err:
+        raise HypervisorError("Can't parse output of xm list,"
+                              " line: %s, error: %s" % (line, err))
+      result.append(data)
+    return result
+
+  def ListInstances(self):
+    """Get the list of running instances.
+
+    """
+    xm_list = self._GetXMList(False)
+    names = [info[0] for info in xm_list]
+    return names
+
+  def GetInstanceInfo(self, instance_name):
+    """Get instance properties.
+
+    Args:
+      instance_name: the instance name
+
+    Returns:
+      (name, id, memory, vcpus, stat, times)
+    """
+    xm_list = self._GetXMList(instance_name=="Domain-0")
+    result = None
+    for data in xm_list:
+      if data[0] == instance_name:
+        result = data
+        break
+    return result
+
+  def GetAllInstancesInfo(self):
+    """Get properties of all instances.
+
+    Returns:
+      [(name, id, memory, vcpus, stat, times),...]
+    """
+    xm_list = self._GetXMList(False)
+    return xm_list
+
+  def StartInstance(self, instance, block_devices, extra_args):
+    """Start an instance."""
+    self._WriteConfigFile(instance, block_devices, extra_args)
+    result = utils.RunCmd(["xm", "create", instance.name])
+
+    if result.failed:
+      raise HypervisorError("Failed to start instance %s: %s" %
+                            (instance.name, result.fail_reason))
+
+  def StopInstance(self, instance, force=False):
+    """Stop an instance."""
+    self._RemoveConfigFile(instance)
+    if force:
+      command = ["xm", "destroy", instance.name]
+    else:
+      command = ["xm", "shutdown", instance.name]
+    result = utils.RunCmd(command)
+
+    if result.failed:
+      raise HypervisorError("Failed to stop instance %s: %s" %
+                            (instance.name, result.fail_reason))
+
+  def GetNodeInfo(self):
+    """Return information about the node.
+
+    The return value is a dict, which has to have the following items:
+      (all values in MiB)
+      - memory_total: the total memory size on the node
+      - memory_free: the available memory on the node for instances
+      - memory_dom0: the memory used by the node itself, if available
+
+    """
+    # note: in xen 3, memory has changed to total_memory
+    result = utils.RunCmd(["xm", "info"])
+    if result.failed:
+      logger.Error("Can't run 'xm info': %s" % result.fail_reason)
+      return None
+
+    xmoutput = result.stdout.splitlines()
+    result = {}
+    for line in xmoutput:
+      splitfields = line.split(":", 1)
+
+      if len(splitfields) > 1:
+        key = splitfields[0].strip()
+        val = splitfields[1].strip()
+        if key == 'memory' or key == 'total_memory':
+          result['memory_total'] = int(val)
+        elif key == 'free_memory':
+          result['memory_free'] = int(val)
+    dom0_info = self.GetInstanceInfo("Domain-0")
+    if dom0_info is not None:
+      result['memory_dom0'] = dom0_info[2]
+
+    return result
+
+  @staticmethod
+  def GetShellCommandForConsole(instance_name):
+    """Return a command for connecting to the console of an instance.
+
+    """
+    return "xm console %s" % instance_name
+
+  def Verify(self):
+    """Verify the hypervisor.
+
+    For Xen, this verifies that the xend process is running.
+
+    """
+    if not utils.CheckDaemonAlive('/var/run/xend.pid', 'xend'):
+      return "xend daemon is not running"
+
+
+class FakeHypervisor(BaseHypervisor):
+  """Fake hypervisor interface.
+
+  This can be used for testing the ganeti code without having to have
+  a real virtualisation software installed.
+
+  """
+
+  _ROOT_DIR = "/var/run/ganeti-fake-hypervisor"
+
+  def __init__(self):
+    BaseHypervisor.__init__(self)
+    if not os.path.exists(self._ROOT_DIR):
+      os.mkdir(self._ROOT_DIR)
+
+  def ListInstances(self):
+    """Get the list of running instances.
+
+    """
+    return os.listdir(self._ROOT_DIR)
+
+  def GetInstanceInfo(self, instance_name):
+    """Get instance properties.
+
+    Args:
+      instance_name: the instance name
+
+    Returns:
+      (name, id, memory, vcpus, stat, times)
+    """
+    file_name = "%s/%s" % (self._ROOT_DIR, instance_name)
+    if not os.path.exists(file_name):
+      return None
+    try:
+      fh = file(file_name, "r")
+      try:
+        inst_id = fh.readline().strip()
+        memory = fh.readline().strip()
+        vcpus = fh.readline().strip()
+        stat = "---b-"
+        times = "0"
+        return (instance_name, inst_id, memory, vcpus, stat, times)
+      finally:
+        fh.close()
+    except IOError, err:
+      raise HypervisorError("Failed to list instance %s: %s" %
+                            (instance_name, err))
+
+  def GetAllInstancesInfo(self):
+    """Get properties of all instances.
+
+    Returns:
+      [(name, id, memory, vcpus, stat, times),...]
+    """
+    data = []
+    for file_name in os.listdir(self._ROOT_DIR):
+      try:
+        fh = file(self._ROOT_DIR+"/"+file_name, "r")
+        inst_id = "-1"
+        memory = "0"
+        stat = "-----"
+        times = "-1"
+        try:
+          inst_id = fh.readline().strip()
+          memory = fh.readline().strip()
+          vcpus = fh.readline().strip()
+          stat = "---b-"
+          times = "0"
+        finally:
+          fh.close()
+        data.append((file_name, inst_id, memory, vcpus, stat, times))
+      except IOError, err:
+        raise HypervisorError("Failed to list instances: %s" % err)
+    return data
+
+  def StartInstance(self, instance, block_devices, extra_args):
+    """Start an instance.
+
+    For the fake hypervisor, this just creates a file in the base dir,
+    raising an exception if it already exists. We don't actually
+    handle race conditions properly, since these are *FAKE* instances.
+
+    """
+    file_name = self._ROOT_DIR + "/%s" % instance.name
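+    # the fake instance "state" is a three-line file -- id, memory,
+    # vcpus -- which GetInstanceInfo() reads back in the same order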
+    if os.path.exists(file_name):
+      raise HypervisorError("Failed to start instance %s: %s" %
+                            (instance.name, "already running"))
+    try:
+      fh = file(file_name, "w")
+      try:
+        fh.write("0\n%d\n%d\n" % (instance.memory, instance.vcpus))
+      finally:
+        fh.close()
+    except IOError, err:
+      raise HypervisorError("Failed to start instance %s: %s" %
+                            (instance.name, err))
+
+  def StopInstance(self, instance, force=False):
+    """Stop an instance.
+
+    For the fake hypervisor, this just removes the file in the base
+    dir, if it exists; otherwise we raise an exception.
+
+    """
+    file_name = self._ROOT_DIR + "/%s" % instance.name
+    if not os.path.exists(file_name):
+      raise HypervisorError("Failed to stop instance %s: %s" %
+                            (instance.name, "not running"))
+    utils.RemoveFile(file_name)
+
+  def GetNodeInfo(self):
+    """Return information about the node.
+
+    The return value is a dict, which has to have the following items:
+      (all values in MiB)
+      - memory_total: the total memory size on the node
+      - memory_free: the available memory on the node for instances
+      - memory_dom0: the memory used by the node itself, if available
+
+    """
+    # compute memory usage from /proc/meminfo, counting buffers and
+    # page cache as free memory; 'Active' serves as a rough stand-in
+    # for the node's own (dom0) usage
+    try:
+      fh = file("/proc/meminfo")
+      try:
+        data = fh.readlines()
+      finally:
+        fh.close()
+    except IOError, err:
+      raise HypervisorError("Failed to list node info: %s" % err)
+
+    result = {}
+    sum_free = 0
+    for line in data:
+      splitfields = line.split(":", 1)
+
+      if len(splitfields) > 1:
+        key = splitfields[0].strip()
+        val = splitfields[1].strip()
+        if key == 'MemTotal':
+          result['memory_total'] = int(val.split()[0])/1024
+        elif key in ('MemFree', 'Buffers', 'Cached'):
+          sum_free += int(val.split()[0])/1024
+        elif key == 'Active':
+          result['memory_dom0'] = int(val.split()[0])/1024
+
+    result['memory_free'] = sum_free
+    return result
+
+  @staticmethod
+  def GetShellCommandForConsole(instance_name):
+    """Return a command for connecting to the console of an instance.
+
+    """
+    return "echo Console not available for fake hypervisor"
+
+  def Verify(self):
+    """Verify the hypervisor.
+
+    For the fake hypervisor, it just checks the existence of the base
+    dir.
+
+    """
+    if not os.path.exists(self._ROOT_DIR):
+      return "The required directory '%s' does not exist." % self._ROOT_DIR
diff --git a/lib/logger.py b/lib/logger.py
new file mode 100644
index 0000000000000000000000000000000000000000..875ba3e235deba7d06df226cf62baaff1de1080c
--- /dev/null
+++ b/lib/logger.py
@@ -0,0 +1,238 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Logging for Ganeti
+
+This module abstracts the logging handling away from the rest of the
+Ganeti code. It offers some utility functions for easy logging.
+"""
+
+# pylint: disable-msg=W0603,C0103
+
+import sys
+import logging
+import os, os.path
+
+from ganeti import constants
+
+_program = '(unknown)'
+_errlog = None
+_inflog = None
+_dbglog = None
+_stdout = None
+_stderr = None
+_debug = False
+
+
+def _SetDestination(name, filename, stream=None):
+  """Configure the destination for a given logger
+
+  This function configures the logging destination for a given logger.
+  Parameters:
+    - name: the logger name
+    - filename: if not empty, log messages will be written (also) to this file
+    - stream: if not None, log messages will be output (also) to this stream
+
+  Returns:
+    - the logger identified by the `name` argument
+  """
+  ret = logging.getLogger(name)
+
+  if filename:
+    fmtr = logging.Formatter('%(asctime)s %(message)s')
+
+    hdlr = logging.FileHandler(filename)
+    hdlr.setFormatter(fmtr)
+    ret.addHandler(hdlr)
+
+  if stream:
+    if name in ('error', 'info', 'debug'):
+      fmtr = logging.Formatter('%(asctime)s %(message)s')
+    else:
+      fmtr = logging.Formatter('%(message)s')
+    hdlr = logging.StreamHandler(stream)
+    hdlr.setFormatter(fmtr)
+    ret.addHandler(hdlr)
+
+  ret.setLevel(logging.INFO)
+
+  return ret
+
+
+def _GenericSetup(program, errfile, inffile, dbgfile,
+                  twisted_workaround=False):
+  """Configure logging based on arguments
+
+  Arguments:
+    - name of program
+    - error log filename
+    - info log filename
+    - debug log filename
+    - twisted_workaround: if true, emit all messages to stderr
+  """
+  global _program
+  global _errlog
+  global _inflog
+  global _dbglog
+  global _stdout
+  global _stderr
+
+  _program = program
+  if twisted_workaround:
+    _errlog = _SetDestination('error', None, sys.stderr)
+    _inflog = _SetDestination('info', None, sys.stderr)
+    _dbglog = _SetDestination('debug', None, sys.stderr)
+  else:
+    _errlog = _SetDestination('error', errfile)
+    _inflog = _SetDestination('info', inffile)
+    _dbglog = _SetDestination('debug', dbgfile)
+
+  _stdout = _SetDestination('user', None, sys.stdout)
+  _stderr = _SetDestination('stderr', None, sys.stderr)
+
+
+def SetupLogging(twisted_workaround=False, debug=False, program='ganeti'):
+  """Setup logging for ganeti
+
+  On failure, a check is made whether the process is run by root or
+  not, an appropriate error message is printed to stderr, and the
+  process exits.
+
+  This function is just a wrapper over `_GenericSetup()` using specific
+  arguments.
+
+  Parameter:
+    twisted_workaround: passed to `_GenericSetup()`
+
+  """
+  try:
+    _GenericSetup(program,
+                  os.path.join(constants.LOG_DIR, "errors"),
+                  os.path.join(constants.LOG_DIR, "info"),
+                  os.path.join(constants.LOG_DIR, "debug"),
+                  twisted_workaround)
+  except IOError:
+    # The major reason to end up here is that we're being run as a
+    # non-root user.  We might also get here if xen has not been
+    # installed properly.  This is not the correct place to enforce
+    # being run by root; nevertheless, it makes sense here because this
+    # is where we first notice it.
+    if os.getuid() != 0:
+      sys.stderr.write('This program must be run by the superuser.\n')
+    else:
+      sys.stderr.write('Unable to open log files.  Incomplete system?\n')
+
+    sys.exit(2)
+
+  global _debug
+  _debug = debug
+
+
+def _WriteEntry(log, txt):
+  """
+  Write a message to a given log.
+  Splits multi-line messages up into a series of log writes, to
+  keep consistent format on lines in file.
+
+  Parameters:
+    - log: the destination log
+    - txt: the message
+
+  """
+  if log is None:
+    sys.stderr.write("Logging system not initialized while processing"
+                     " message:\n")
+    sys.stderr.write("%s\n" % txt)
+    return
+
+  lines = txt.split('\n')
+
+  spaces = ' ' * len(_program) + '| '
+
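+  # e.g. with _program == 'ganeti', a two-line message is logged as:
+  #   ganeti: first line
+  #         | second line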
+  lines = ([ _program + ': ' + lines[0] ] +
+           map(lambda a: spaces + a, lines[1:]))
+
+  for line in lines:
+    log.log(logging.INFO, line)
+
+
+def ToStdout(txt):
+  """Write a message to stdout only, bypassing the logging system
+
+  Parameters:
+    - txt: the message
+
+  """
+  sys.stdout.write(txt + '\n')
+  sys.stdout.flush()
+
+
+def ToStderr(txt):
+  """Write a message to stderr only, bypassing the logging system
+
+  Parameters:
+    - txt: the message
+
+  """
+  sys.stderr.write(txt + '\n')
+  sys.stderr.flush()
+
+
+def Error(txt):
+  """Write a message to our error log
+
+  The message is also unconditionally written to stderr.
+
+  Parameters:
+    - txt: the log message
+
+  """
+  _WriteEntry(_errlog, txt)
+  sys.stderr.write(txt + '\n')
+
+
+def Info(txt):
+  """Write a message to our general messages log
+
+  If the global debug flag is true, the log message will also be
+  output to stderr.
+
+  Parameters:
+    - txt: the log message
+
+  """
+  _WriteEntry(_inflog, txt)
+  if _debug:
+    _WriteEntry(_stderr, txt)
+
+
+def Debug(txt):
+  """Write a message to the debug log
+
+  If the global debug flag is true, the log message will also be
+  output to stderr.
+
+  Parameters:
+    - txt: the log message
+
+  """
+  _WriteEntry(_dbglog, txt)
+  if _debug:
+    _WriteEntry(_stderr, txt)
diff --git a/lib/mcpu.py b/lib/mcpu.py
new file mode 100644
index 0000000000000000000000000000000000000000..95370688dc5caa7212c2feec1b10413d5aa920ad
--- /dev/null
+++ b/lib/mcpu.py
@@ -0,0 +1,238 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Module implementing the logic behind the cluster operations
+
+This module implements the logic for doing operations in the cluster. There
+are two kinds of classes defined:
+  - logical units, which know how to deal with their specific opcode only
+  - the processor, which dispatches the opcodes to their logical units
+
+"""
+
+
+import os
+import os.path
+import time
+
+from ganeti import opcodes
+from ganeti import logger
+from ganeti import constants
+from ganeti import utils
+from ganeti import errors
+from ganeti import rpc
+from ganeti import cmdlib
+from ganeti import config
+from ganeti import ssconf
+
+class Processor(object):
+  """Object which runs OpCodes"""
+  DISPATCH_TABLE = {
+    # Cluster
+    opcodes.OpInitCluster: cmdlib.LUInitCluster,
+    opcodes.OpDestroyCluster: cmdlib.LUDestroyCluster,
+    opcodes.OpQueryClusterInfo: cmdlib.LUQueryClusterInfo,
+    opcodes.OpClusterCopyFile: cmdlib.LUClusterCopyFile,
+    opcodes.OpRunClusterCommand: cmdlib.LURunClusterCommand,
+    opcodes.OpVerifyCluster: cmdlib.LUVerifyCluster,
+    opcodes.OpMasterFailover: cmdlib.LUMasterFailover,
+    opcodes.OpDumpClusterConfig: cmdlib.LUDumpClusterConfig,
+    # node lu
+    opcodes.OpAddNode: cmdlib.LUAddNode,
+    opcodes.OpQueryNodes: cmdlib.LUQueryNodes,
+    opcodes.OpQueryNodeData: cmdlib.LUQueryNodeData,
+    opcodes.OpRemoveNode: cmdlib.LURemoveNode,
+    # instance lu
+    opcodes.OpCreateInstance: cmdlib.LUCreateInstance,
+    opcodes.OpRemoveInstance: cmdlib.LURemoveInstance,
+    opcodes.OpActivateInstanceDisks: cmdlib.LUActivateInstanceDisks,
+    opcodes.OpShutdownInstance: cmdlib.LUShutdownInstance,
+    opcodes.OpStartupInstance: cmdlib.LUStartupInstance,
+    opcodes.OpDeactivateInstanceDisks: cmdlib.LUDeactivateInstanceDisks,
+    opcodes.OpAddMDDRBDComponent: cmdlib.LUAddMDDRBDComponent,
+    opcodes.OpRemoveMDDRBDComponent: cmdlib.LURemoveMDDRBDComponent,
+    opcodes.OpReplaceDisks: cmdlib.LUReplaceDisks,
+    opcodes.OpFailoverInstance: cmdlib.LUFailoverInstance,
+    opcodes.OpConnectConsole: cmdlib.LUConnectConsole,
+    opcodes.OpQueryInstances: cmdlib.LUQueryInstances,
+    opcodes.OpQueryInstanceData: cmdlib.LUQueryInstanceData,
+    opcodes.OpSetInstanceParms: cmdlib.LUSetInstanceParms,
+    # os lu
+    opcodes.OpDiagnoseOS: cmdlib.LUDiagnoseOS,
+    # exports lu
+    opcodes.OpQueryExports: cmdlib.LUQueryExports,
+    opcodes.OpExportInstance: cmdlib.LUExportInstance,
+    }
+
+  def __init__(self):
+    """Constructor for Processor
+
+    """
+    self.cfg = None
+    self.sstore = None
+
+  def ExecOpCode(self, op, feedback_fn):
+    """Execute an opcode.
+
+    Args:
+     - op: the opcode to be executed
+     - feedback_fn: the feedback function (taking one string) to be run when
+                    interesting events are happening
+
+    """
+    if not isinstance(op, opcodes.OpCode):
+      raise errors.ProgrammerError, ("Non-opcode instance passed"
+                                     " to ExecOpcode")
+
+    lu_class = self.DISPATCH_TABLE.get(op.__class__, None)
+    if lu_class is None:
+      raise errors.OpCodeUnknown, "Unknown opcode"
+
+    if lu_class.REQ_CLUSTER and self.cfg is None:
+      self.cfg = config.ConfigWriter()
+      self.sstore = ssconf.SimpleStore()
+    lu = lu_class(self, op, self.cfg, self.sstore)
+    lu.CheckPrereq()
+    do_hooks = lu_class.HPATH is not None
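+    # execution flow: CheckPrereq -> pre-hooks -> Exec -> post-hooks;
+    # hooks are skipped entirely for LUs that define no HPATH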
+    if do_hooks:
+      hm = HooksMaster(rpc.call_hooks_runner, self.cfg, lu)
+      hm.RunPhase(constants.HOOKS_PHASE_PRE)
+    result = lu.Exec(feedback_fn)
+    if do_hooks:
+      hm.RunPhase(constants.HOOKS_PHASE_POST)
+    return result
+
+  def ChainOpCode(self, op, feedback_fn):
+    """Chain and execute an opcode.
+
+    This is used by LUs when they need to execute a child LU.
+
+    Args:
+     - op: the opcode to be executed
+     - feedback_fn: the feedback function (taking one string) to be run when
+                    interesting events are happening
+
+    """
+    if not isinstance(op, opcodes.OpCode):
+      raise errors.ProgrammerError, ("Non-opcode instance passed"
+                                     " to ExecOpcode")
+
+    lu_class = self.DISPATCH_TABLE.get(op.__class__, None)
+    if lu_class is None:
+      raise errors.OpCodeUnknown, "Unknown opcode"
+
+    if lu_class.REQ_CLUSTER and self.cfg is None:
+      self.cfg = config.ConfigWriter()
+      self.sstore = ssconf.SimpleStore()
+    do_hooks = lu_class.HPATH is not None
+    lu = lu_class(self, op, self.cfg, self.sstore)
+    lu.CheckPrereq()
+    #if do_hooks:
+    #  hm = HooksMaster(rpc.call_hooks_runner, self.cfg, lu)
+    #  hm.RunPhase(constants.HOOKS_PHASE_PRE)
+    result = lu.Exec(feedback_fn)
+    #if do_hooks:
+    #  hm.RunPhase(constants.HOOKS_PHASE_POST)
+    return result
+
+
+class HooksMaster(object):
+  """Hooks master.
+
+  This class distributes the run commands to the nodes based on the
+  specific LU class.
+
+  In order to remove the direct dependency on the rpc module, the
+  constructor needs a function which actually does the remote
+  call. This will usually be rpc.call_hooks_runner, but any function
+  which behaves the same works.
+
+  """
+  def __init__(self, callfn, cfg, lu):
+    self.callfn = callfn
+    self.cfg = cfg
+    self.lu = lu
+    self.op = lu.op
+    self.hpath = self.lu.HPATH
+    self.env, node_list_pre, node_list_post = self._BuildEnv()
+
+    self.node_list = {constants.HOOKS_PHASE_PRE: node_list_pre,
+                      constants.HOOKS_PHASE_POST: node_list_post}
+
+  def _BuildEnv(self):
+    """Compute the environment and the target nodes.
+
+    Based on the opcode and the current node list, this builds the
+    environment for the hooks and the target node list for the run.
+
+    """
+    env = {
+      "PATH": "/sbin:/bin:/usr/sbin:/usr/bin",
+      "GANETI_HOOKS_VERSION": constants.HOOKS_VERSION,
+      "GANETI_OP_CODE": self.op.OP_ID,
+      "GANETI_OBJECT_TYPE": self.lu.HTYPE,
+      }
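+    # e.g. a hypothetical opcode with OP_ID "OP_CLUSTER_VERIFY" yields
+    # GANETI_OP_CODE=OP_CLUSTER_VERIFY and GANETI_HOOKS_VERSION=1, plus
+    # GANETI_CLUSTER/GANETI_MASTER when a config is available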
+
+    lu_env, lu_nodes_pre, lu_nodes_post = self.lu.BuildHooksEnv()
+    if lu_env:
+      for key in lu_env:
+        env["GANETI_" + key] = lu_env[key]
+
+    if self.cfg is not None:
+      env["GANETI_CLUSTER"] = self.cfg.GetClusterName()
+      env["GANETI_MASTER"] = self.cfg.GetMaster()
+
+    for key in env:
+      if not isinstance(env[key], str):
+        env[key] = str(env[key])
+
+    return env, frozenset(lu_nodes_pre), frozenset(lu_nodes_post)
+
+  def RunPhase(self, phase):
+    """Run all the scripts for a phase.
+
+    This is the main function of the HooksMaster.
+
+    """
+    if not self.node_list[phase]:
+      # empty node list, we should not attempt to run this
+      # as most probably we're in the cluster init phase and the rpc client
+      # part can't even attempt to run
+      return
+    self.env["GANETI_HOOKS_PHASE"] = str(phase)
+    results = self.callfn(self.node_list[phase], self.hpath, phase, self.env)
+    if phase == constants.HOOKS_PHASE_PRE:
+      errs = []
+      if not results:
+        raise errors.HooksFailure, "Communication failure"
+      for node_name in results:
+        res = results[node_name]
+        if res is False or not isinstance(res, list):
+          raise errors.HooksFailure, ("Communication failure to node %s" %
+                                      node_name)
+        for script, hkr, output in res:
+          if hkr == constants.HKR_FAIL:
+            output = output.strip().encode("string_escape")
+            errs.append((node_name, script, output))
+      if errs:
+        raise errors.HooksAbort(errs)
diff --git a/lib/objects.py b/lib/objects.py
new file mode 100644
index 0000000000000000000000000000000000000000..3c8b2cc487314aa62e323f8767574406367ffc69
--- /dev/null
+++ b/lib/objects.py
@@ -0,0 +1,372 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Transportable objects for Ganeti.
+
+This module provides small, mostly data-only objects which are safe to
+pass to and from external parties.
+
+"""
+
+
+import cPickle
+from cStringIO import StringIO
+import ConfigParser
+
+from ganeti import errors
+
+
+__all__ = ["ConfigObject", "ConfigData", "NIC", "Disk", "Instance",
+           "OS", "Node", "Cluster"]
+
+
+class ConfigObject(object):
+  """A generic config object.
+
+  It has the following properties:
+
+    - provides somewhat safe recursive unpickling and pickling for its classes
+    - unset attributes which are defined in slots are always returned
+      as None instead of raising an error
+
+  Classes derived from this must always declare __slots__ (we use many
+  config objects and the memory reduction is useful).
+
+  """
+  __slots__ = []
+
+  def __init__(self, **kwargs):
+    for i in kwargs:
+      setattr(self, i, kwargs[i])
+
+  def __getattr__(self, name):
+    if name not in self.__slots__:
+      raise AttributeError, ("Invalid object attribute %s.%s" %
+                             (type(self).__name__, name))
+    return None
+
+  def __getstate__(self):
+    state = {}
+    for name in self.__slots__:
+      if hasattr(self, name):
+        state[name] = getattr(self, name)
+    return state
+
+  def __setstate__(self, state):
+    for name in state:
+      if name in self.__slots__:
+        setattr(self, name, state[name])
+
+  @staticmethod
+  def FindGlobal(module, name):
+    """Function filtering the allowed classes to be un-pickled.
+
+    Currently, we only allow the classes from this module which are
+    derived from ConfigObject.
+
+    """
+    # Also support the old module name (ganeti.config)
+    cls = None
+    if module == "ganeti.config" or module == "ganeti.objects":
+      if name == "ConfigData":
+        cls = ConfigData
+      elif name == "NIC":
+        cls = NIC
+      elif name == "Disk" or name == "BlockDev":
+        cls = Disk
+      elif name == "Instance":
+        cls = Instance
+      elif name == "OS":
+        cls = OS
+      elif name == "Node":
+        cls = Node
+      elif name == "Cluster":
+        cls = Cluster
+    if cls is None:
+      raise cPickle.UnpicklingError, ("Class %s.%s not allowed due to"
+                                      " security concerns" % (module, name))
+    return cls
+
+  def Dump(self, fobj):
+    """Dump this instance to a file object.
+
+    Note that we use HIGHEST_PROTOCOL, as it brings benefits for
+    new-style classes.
+
+    """
+    dumper = cPickle.Pickler(fobj, cPickle.HIGHEST_PROTOCOL)
+    dumper.dump(self)
+
+  @staticmethod
+  def Load(fobj):
+    """Unpickle data from the given stream.
+
+    This uses the `FindGlobal` function to filter the allowed classes.
+
+    """
+    loader = cPickle.Unpickler(fobj)
+    loader.find_global = ConfigObject.FindGlobal
+    return loader.load()
+
+  def Dumps(self):
+    """Dump this instance and return the string representation."""
+    buf = StringIO()
+    self.Dump(buf)
+    return buf.getvalue()
+
+  @staticmethod
+  def Loads(data):
+    """Load data from a string."""
+    return ConfigObject.Load(StringIO(data))
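+
+  # Round-trip sketch (illustrative values): for any class accepted by
+  # FindGlobal,
+  #   node = Node(name="node1", primary_ip="192.0.2.1")
+  #   restored = ConfigObject.Loads(node.Dumps())
+  # yields an equivalent object, while unpickling any other class fails.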
+
+
+class ConfigData(ConfigObject):
+  """Top-level config object."""
+  __slots__ = ["cluster", "nodes", "instances"]
+
+
+class NIC(ConfigObject):
+  """Config object representing a network card."""
+  __slots__ = ["mac", "ip", "bridge"]
+
+
+class Disk(ConfigObject):
+  """Config object representing a block device."""
+  __slots__ = ["dev_type", "logical_id", "physical_id",
+               "children", "iv_name", "size"]
+
+  def CreateOnSecondary(self):
+    """Test if this device needs to be created on a secondary node."""
+    return self.dev_type in ("drbd", "lvm")
+
+  def AssembleOnSecondary(self):
+    """Test if this device needs to be assembled on a secondary node."""
+    return self.dev_type in ("drbd", "lvm")
+
+  def OpenOnSecondary(self):
+    """Test if this device needs to be opened on a secondary node."""
+    return self.dev_type in ("lvm",)
+
+  def GetNodes(self, node):
+    """This function returns the nodes this device lives on.
+
+    Given the node on which the parent of the device lives on (or, in
+    case of a top-level device, the primary node of the devices'
+    instance), this function will return a list of nodes on which this
+    devices needs to (or can) be assembled.
+
+    """
+    if self.dev_type == "lvm" or self.dev_type == "md_raid1":
+      result = [node]
+    elif self.dev_type == "drbd":
+      result = [self.logical_id[0], self.logical_id[1]]
+      if node not in result:
+        raise errors.ConfigurationError, ("DRBD device passed unknown node")
+    else:
+      raise errors.ProgrammerError, "Unhandled device type %s" % self.dev_type
+    return result
+
+  def ComputeNodeTree(self, parent_node):
+    """Compute the node/disk tree for this disk and its children.
+
+    This method, given the node on which the parent disk lives, will
+    return the list of all (node, disk) pairs which describe the disk
+    tree in the most compact way. For example, a md/drbd/lvm stack
+    will be returned as (primary_node, md) and (secondary_node, drbd)
+    which represents all the top-level devices on the nodes. This
+    means that on the primary node we need to activate the md (and
+    recursively all its children) and on the secondary node we need to
+    activate the drbd device (and its children, the two lvm volumes).
+
+    """
+    my_nodes = self.GetNodes(parent_node)
+    result = [(node, self) for node in my_nodes]
+    if not self.children:
+      # leaf device
+      return result
+    for node in my_nodes:
+      for child in self.children:
+        child_result = child.ComputeNodeTree(node)
+        if len(child_result) == 1:
+          # child (and all its descendants) is simple, doesn't split
+          # over multiple hosts, so we don't need to describe it, our
+          # own entry for this node describes it completely
+          continue
+        else:
+          # check if child nodes differ from my nodes; note that
+          # subdisk can differ from the child itself, and be instead
+          # one of its descendants
+          for subnode, subdisk in child_result:
+            if subnode not in my_nodes:
+              result.append((subnode, subdisk))
+            # otherwise child is under our own node, so we ignore this
+            # entry (but probably the other results in the list will
+            # be different)
+    return result
+
+
+class Instance(ConfigObject):
+  """Config object representing an instance."""
+  __slots__ = [
+    "name",
+    "primary_node",
+    "os",
+    "status",
+    "memory",
+    "vcpus",
+    "nics",
+    "disks",
+    "disk_template",
+    ]
+
+  def _ComputeSecondaryNodes(self):
+    """Compute the list of secondary nodes.
+
+    Since the data is already there (in the drbd disks), keeping it as
+    a separate normal attribute is redundant and if not properly
+    synchronised can cause problems. Thus it's better to compute it
+    dynamically.
+
+    """
+    def _Helper(primary, sec_nodes, device):
+      """Recursively computes secondary nodes given a top device."""
+      if device.dev_type == 'drbd':
+        nodea, nodeb, dummy = device.logical_id
+        if nodea == primary:
+          candidate = nodeb
+        else:
+          candidate = nodea
+        if candidate not in sec_nodes:
+          sec_nodes.append(candidate)
+      if device.children:
+        for child in device.children:
+          _Helper(primary, sec_nodes, child)
+
+    secondary_nodes = []
+    for device in self.disks:
+      _Helper(self.primary_node, secondary_nodes, device)
+    return tuple(secondary_nodes)
+
+  secondary_nodes = property(_ComputeSecondaryNodes, None, None,
+                             "List of secondary nodes")
+
+  def MapLVsByNode(self, lvmap=None, devs=None, node=None):
+    """Provide a mapping of nodes to LVs this instance owns.
+
+    This function figures out what logical volumes should belong on which
+    nodes, recursing through a device tree.
+
+    Args:
+      lvmap: (optional) a dictionary to receive the 'node' : ['lv', ...] data.
+
+    Returns:
+      None if lvmap arg is given.
+      Otherwise, { 'nodename' : ['volume1', 'volume2', ...], ... }
+
+    """
+
+    if node is None:
+      node = self.primary_node
+
+    if lvmap is None:
+      lvmap = { node : [] }
+      ret = lvmap
+    else:
+      if node not in lvmap:
+        lvmap[node] = []
+      ret = None
+
+    if not devs:
+      devs = self.disks
+
+    for dev in devs:
+      if dev.dev_type == "lvm":
+        lvmap[node].append(dev.logical_id[1])
+
+      elif dev.dev_type == "drbd":
+        if dev.logical_id[0] not in lvmap:
+          lvmap[dev.logical_id[0]] = []
+
+        if dev.logical_id[1] not in lvmap:
+          lvmap[dev.logical_id[1]] = []
+
+        if dev.children:
+          self.MapLVsByNode(lvmap, dev.children, dev.logical_id[0])
+          self.MapLVsByNode(lvmap, dev.children, dev.logical_id[1])
+
+      elif dev.children:
+        self.MapLVsByNode(lvmap, dev.children, node)
+
+    return ret
+
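+# Illustrative sketch (hypothetical names): for an instance whose only
+# disk is a drbd device mirrored between node1 and node2, backed by a
+# single logical volume child, MapLVsByNode would return:
+#   inst.MapLVsByNode()
+#   # => {'node1': ['lv_data'], 'node2': ['lv_data']}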
+
+class OS(ConfigObject):
+  """Config object representing an operating system."""
+  __slots__ = [
+    "name",
+    "path",
+    "api_version",
+    "create_script",
+    "export_script",
+    "import_script"
+    ]
+
+
+class Node(ConfigObject):
+  """Config object representing a node."""
+  __slots__ = ["name", "primary_ip", "secondary_ip"]
+
+
+class Cluster(ConfigObject):
+  """Config object representing the cluster."""
+  __slots__ = [
+    "config_version",
+    "serial_no",
+    "master_node",
+    "name",
+    "rsahostkeypub",
+    "highest_used_port",
+    "mac_prefix",
+    "volume_group_name",
+    "default_bridge",
+    ]
+
+
+class SerializableConfigParser(ConfigParser.SafeConfigParser):
+  """Simple wrapper over ConfigParse that allows serialization.
+
+  This class is basically ConfigParser.SafeConfigParser with two
+  additional methods that allow it to serialize/unserialize to/from a
+  buffer.
+
+  """
+  def Dumps(self):
+    """Dump this instance and return the string representation."""
+    buf = StringIO()
+    self.write(buf)
+    return buf.getvalue()
+
+  @staticmethod
+  def Loads(data):
+    """Load data from a string."""
+    buf = StringIO(data)
+    cfp = SerializableConfigParser()
+    cfp.readfp(buf)
+    return cfp
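+
+
+# Round-trip sketch (illustrative):
+#   cfp = SerializableConfigParser.Loads("[export]\nos = debian\n")
+#   cfp.get("export", "os")      # => 'debian'
+#   data = cfp.Dumps()           # back to a string
+#   SerializableConfigParser.Loads(data).get("export", "os")  # 'debian'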
diff --git a/lib/opcodes.py b/lib/opcodes.py
new file mode 100644
index 0000000000000000000000000000000000000000..7d4d65f724bd4f768579cc1673ef66f3f6e0c918
--- /dev/null
+++ b/lib/opcodes.py
@@ -0,0 +1,229 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""OpCodes module
+
+This module implements the data structures which define the cluster
+operations - the so-called opcodes.
+
+
+This module implements the logic for doing operations in the cluster. There
+are two kinds of classes defined:
+  - opcodes, which are small classes only holding data for the task at hand
+  - logical units, which know how to deal with their specific opcode only
+
+"""
+
+# these are practically structures, so disable the message about too
+# few public methods:
+# pylint: disable-msg=R0903
+
+class OpCode(object):
+  """Abstract OpCode"""
+  OP_ID = "OP_ABSTRACT"
+  __slots__ = []
+
+  def __init__(self, **kwargs):
+    for key in kwargs:
+      if key not in self.__slots__:
+        raise TypeError, ("OpCode %s doesn't support the parameter '%s'" %
+                          (self.__class__.__name__, key))
+      setattr(self, key, kwargs[key])
+
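+# Usage sketch (illustrative names): opcodes are built with keyword
+# arguments matching their __slots__, and unknown parameters are
+# rejected:
+#
+#   op = OpShutdownInstance(instance_name="inst1.example.com")
+#   OpShutdownInstance(foo=1)   # raises TypeError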
+
+class OpInitCluster(OpCode):
+  """Initialise the cluster."""
+  OP_ID = "OP_CLUSTER_INIT"
+  __slots__ = ["cluster_name", "secondary_ip", "hypervisor_type",
+               "vg_name", "mac_prefix", "def_bridge"]
+
+
+class OpDestroyCluster(OpCode):
+  """Destroy the cluster."""
+  OP_ID = "OP_CLUSTER_DESTROY"
+  __slots__ = []
+
+
+class OpQueryClusterInfo(OpCode):
+  """Initialise the cluster."""
+  OP_ID = "OP_CLUSTER_QUERY"
+  __slots__ = []
+
+
+class OpClusterCopyFile(OpCode):
+  """Initialise the cluster."""
+  OP_ID = "OP_CLUSTER_COPYFILE"
+  __slots__ = ["nodes", "filename"]
+
+
+class OpRunClusterCommand(OpCode):
+  """Initialise the cluster."""
+  OP_ID = "OP_CLUSTER_RUNCOMMAND"
+  __slots__ = ["nodes", "command"]
+
+
+class OpVerifyCluster(OpCode):
+  """Initialise the cluster."""
+  OP_ID = "OP_CLUSTER_VERIFY"
+  __slots__ = []
+
+
+class OpMasterFailover(OpCode):
+  """Initialise the cluster."""
+  OP_ID = "OP_CLUSTER_MASTERFAILOVER"
+  __slots__ = []
+
+
+class OpDumpClusterConfig(OpCode):
+  """Initialise the cluster."""
+  OP_ID = "OP_CLUSTER_DUMPCONFIG"
+  __slots__ = []
+
+
+class OpRemoveNode(OpCode):
+  """Remove a node."""
+  OP_ID = "OP_NODE_REMOVE"
+  __slots__ = ["node_name"]
+
+
+class OpAddNode(OpCode):
+  """Add a node."""
+  OP_ID = "OP_NODE_ADD"
+  __slots__ = ["node_name", "primary_ip", "secondary_ip"]
+
+
+class OpQueryNodes(OpCode):
+  """Compute the list of nodes."""
+  OP_ID = "OP_NODE_QUERY"
+  __slots__ = ["output_fields"]
+
+
+class OpQueryNodeData(OpCode):
+  """Compute the node info."""
+  OP_ID = "OP_NODE_INFO"
+  __slots__ = ["nodes"]
+
+
+# instance opcodes
+
+class OpCreateInstance(OpCode):
+  """Compute the list of instances."""
+  OP_ID = "OP_INSTANCE_CREATE"
+  __slots__ = ["instance_name", "mem_size", "disk_size", "os_type", "pnode",
+               "disk_template", "snode", "swap_size", "mode",
+               "vcpus", "ip", "bridge", "src_node", "src_path", "start",
+               "wait_for_sync"]
+
+
+class OpRemoveInstance(OpCode):
+  """Remove an instance."""
+  OP_ID = "OP_INSTANCE_REMOVE"
+  __slots__ = ["instance_name"]
+
+
+class OpStartupInstance(OpCode):
+  """Remove an instance."""
+  OP_ID = "OP_INSTANCE_STARTUP"
+  __slots__ = ["instance_name", "force", "extra_args"]
+
+
+class OpShutdownInstance(OpCode):
+  """Remove an instance."""
+  OP_ID = "OP_INSTANCE_SHUTDOWN"
+  __slots__ = ["instance_name"]
+
+
+class OpAddMDDRBDComponent(OpCode):
+  """Add a MD-DRBD component."""
+  OP_ID = "OP_INSTANCE_ADD_MDDRBD"
+  __slots__ = ["instance_name", "remote_node", "disk_name"]
+
+
+class OpRemoveMDDRBDComponent(OpCode):
+  """Remove a MD-DRBD component."""
+  OP_ID = "OP_INSTANCE_REMOVE_MDDRBD"
+  __slots__ = ["instance_name", "disk_name", "disk_id"]
+
+
+class OpReplaceDisks(OpCode):
+  """Replace disks of an instance."""
+  OP_ID = "OP_INSTANCE_REPLACE_DISKS"
+  __slots__ = ["instance_name", "remote_node"]
+
+
+class OpFailoverInstance(OpCode):
+  """Failover an instance."""
+  OP_ID = "OP_INSTANCE_FAILOVER"
+  __slots__ = ["instance_name", "ignore_consistency"]
+
+
+class OpConnectConsole(OpCode):
+  """Failover an instance."""
+  OP_ID = "OP_INSTANCE_CONSOLE"
+  __slots__ = ["instance_name"]
+
+
+class OpActivateInstanceDisks(OpCode):
+  """Remove an instance."""
+  OP_ID = "OP_INSTANCE_ACTIVATE_DISKS"
+  __slots__ = ["instance_name"]
+
+
+class OpDeactivateInstanceDisks(OpCode):
+  """Remove an instance."""
+  OP_ID = "OP_INSTANCE_DEACTIVATE_DISKS"
+  __slots__ = ["instance_name"]
+
+
+class OpQueryInstances(OpCode):
+  """Compute the list of instances."""
+  OP_ID = "OP_INSTANCE_QUERY"
+  __slots__ = ["output_fields"]
+
+
+class OpQueryInstanceData(OpCode):
+  """Compute the run-time status of instances."""
+  OP_ID = "OP_INSTANCE_QUERY_DATA"
+  __slots__ = ["instances"]
+
+
+class OpSetInstanceParms(OpCode):
+  """Change the parameters of an instance."""
+  OP_ID = "OP_INSTANCE_SET_PARMS"
+  __slots__ = ["instance_name", "mem", "vcpus", "ip", "bridge"]
+
+
+# OS opcodes
+class OpDiagnoseOS(OpCode):
+  """Compute the list of guest operating systems."""
+  OP_ID = "OP_OS_DIAGNOSE"
+  __slots__ = []
+
+
+# Exports opcodes
+class OpQueryExports(OpCode):
+  """Compute the list of exported images."""
+  OP_ID = "OP_BACKUP_QUERY"
+  __slots__ = ["nodes"]
+
+
+class OpExportInstance(OpCode):
+  """Export an instance."""
+  OP_ID = "OP_BACKUP_EXPORT"
+  __slots__ = ["instance_name", "target_node", "shutdown"]
diff --git a/lib/rpc.py b/lib/rpc.py
new file mode 100644
index 0000000000000000000000000000000000000000..ddb10ae8b991babae0a32b5af4b084f2b81fa495
--- /dev/null
+++ b/lib/rpc.py
@@ -0,0 +1,764 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Script to show add a new node to the cluster
+
+"""
+
+# pylint: disable-msg=C0103
+
+import os
+
+from twisted.internet.pollreactor import PollReactor
+
+class ReReactor(PollReactor):
+  """A re-startable Reactor implementation"""
+
+  def run(self, installSignalHandlers=1):
+    """Custom run method.
+
+    This is customized run that, before calling Reactor.run, will
+    reinstall the shutdown events and re-create the threadpool in case
+    these are not present (as will happen on the second run of the
+    reactor).
+
+    """
+    if not 'shutdown' in self._eventTriggers:
+      # the shutdown queue has been killed, we are most probably
+      # at the second run, thus recreate the queue
+      self.addSystemEventTrigger('during', 'shutdown', self.crash)
+      self.addSystemEventTrigger('during', 'shutdown', self.disconnectAll)
+    if self.threadpool is not None and self.threadpool.joined == 1:
+      # in case the threadpool has been stopped, re-start it
+      # and add a trigger to stop it at reactor shutdown
+      self.threadpool.start()
+      self.addSystemEventTrigger('during', 'shutdown', self.threadpool.stop)
+
+    return PollReactor.run(self, installSignalHandlers)
+
+
+import twisted.internet.main
+twisted.internet.main.installReactor(ReReactor())
+
+from twisted.spread import pb
+from twisted.internet import reactor
+from twisted.cred import credentials
+from OpenSSL import SSL, crypto
+
+from ganeti import logger
+from ganeti import utils
+from ganeti import errors
+from ganeti import constants
+from ganeti import objects
+from ganeti import ssconf
+
+class NodeController:
+  """Node-handling class.
+
+  For each node that we speak with, we create an instance of this
+  class, so that we have a safe place to store the details of this
+  individual call.
+
+  """
+  def __init__(self, parent, node):
+    self.parent = parent
+    self.node = node
+
+  def _check_end(self):
+    """Stop the reactor if we got all the results.
+
+    """
+    if len(self.parent.results) == len(self.parent.nc):
+      reactor.stop()
+
+  def cb_call(self, obj):
+    """Callback for successfull connect.
+
+    If the connect and login sequence succeeded, we proceed with
+    making the actual call.
+
+    """
+    deferred = obj.callRemote(self.parent.procedure, self.parent.args)
+    deferred.addCallbacks(self.cb_done, self.cb_err2)
+
+  def cb_done(self, result):
+    """Callback for successful call.
+
+    When we receive the result from a call, we check if it was an
+    error and if so we raise a generic RemoteError (we can't yet pass
+    the actual exception over). If there was no error, we store the
+    result.
+
+    """
+    tb, self.parent.results[self.node] = result
+    self._check_end()
+    if tb:
+      raise errors.RemoteError("Remote procedure error calling %s on %s:"
+                               "\n%s" % (self.parent.procedure,
+                                         self.node,
+                                         tb))
+
+  def cb_err1(self, reason):
+    """Error callback for unsuccessful connect.
+
+    """
+    logger.Error("caller_connect: could not connect to remote host %s,"
+                 " reason %s" % (self.node, reason))
+    self.parent.results[self.node] = False
+    self._check_end()
+
+  def cb_err2(self, reason):
+    """Error callback for unsuccessful call.
+
+    This is when the call didn't return anything, not even an error,
+    or when it timed out, etc.
+
+    """
+    logger.Error("caller_call: could not call %s on node %s,"
+                 " reason %s" % (self.parent.procedure, self.node, reason))
+    self.parent.results[self.node] = False
+    self._check_end()
+
+
+class MirrorContextFactory:
+  """Certificate verifier factory.
+
+  This factory creates contexts that verify if the remote end has a
+  specific certificate (i.e. our own certificate).
+
+  The checks we do are that the PEM dump of the certificate is the
+  same as our own and (somewhat redundantly) that the SHA checksum is
+  the same.
+
+  """
+  isClient = 1
+
+  def __init__(self):
+    try:
+      fd = open(constants.SSL_CERT_FILE, 'r')
+      try:
+        data = fd.read(16384)
+      finally:
+        fd.close()
+    except EnvironmentError, err:
+      raise errors.ConfigurationError, ("missing SSL certificate: %s" %
+                                        str(err))
+    self.mycert = crypto.load_certificate(crypto.FILETYPE_PEM, data)
+    self.mypem = crypto.dump_certificate(crypto.FILETYPE_PEM, self.mycert)
+    self.mydigest = self.mycert.digest('SHA')
+
+  def verifier(self, conn, x509, errno, err_depth, retcode):
+    """Certificate verify method.
+
+    """
+    if self.mydigest != x509.digest('SHA'):
+      return False
+    if crypto.dump_certificate(crypto.FILETYPE_PEM, x509) != self.mypem:
+      return False
+    return True
+
+  def getContext(self):
+    """Context generator.
+
+    """
+    context = SSL.Context(SSL.TLSv1_METHOD)
+    context.set_verify(SSL.VERIFY_PEER, self.verifier)
+    return context
+
+
+class Client:
+  """RPC Client class.
+
+  This class, given a (remote) method name, a list of parameters and a
+  list of nodes, will contact (in parallel) all nodes, and return a
+  dict of results (key: node name, value: result).
+
+  One current bug is that generic failure is still signalled by
+  'False' result, which is not good. This overloading of values can
+  cause bugs.
+
+  """
+  result_set = False
+  result = False
+  allresult = []
+
+  def __init__(self, procedure, args):
+    ss = ssconf.SimpleStore()
+    self.port = ss.GetNodeDaemonPort()
+    self.nodepw = ss.GetNodeDaemonPassword()
+    self.nc = {}
+    self.results = {}
+    self.procedure = procedure
+    self.args = args
+
+  #--- generic connector -------------
+
+  def connect_list(self, node_list):
+    """Add a list of nodes to the target nodes.
+
+    """
+    for node in node_list:
+      self.connect(node)
+
+  def connect(self, connect_node):
+    """Add a node to the target list.
+
+    """
+    factory = pb.PBClientFactory()
+    self.nc[connect_node] = nc = NodeController(self, connect_node)
+    reactor.connectSSL(connect_node, self.port, factory,
+                       MirrorContextFactory())
+    #d = factory.getRootObject()
+    d = factory.login(credentials.UsernamePassword("master_node", self.nodepw))
+    d.addCallbacks(nc.cb_call, nc.cb_err1)
+
+  def getresult(self):
+    """Return the results of the call.
+
+    """
+    return self.results
+
+  def run(self):
+    """Wrapper over reactor.run().
+
+    This function simply calls reactor.run() if we have any requests
+    queued, otherwise it does nothing.
+
+    """
+    if self.nc:
+      reactor.run()
+
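+# Usage sketch (illustrative; this is the pattern the call_* helpers
+# below follow):
+#   c = Client("volume_list", ["xenvg"])
+#   c.connect_list(["node1", "node2"])
+#   c.run()
+#   c.getresult()   # => {'node1': <result>, 'node2': <result>}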
+
+def call_volume_list(node_list, vg_name):
+  """Gets the logical volumes present in a given volume group.
+
+  This is a multi-node call.
+
+  """
+  c = Client("volume_list", [vg_name])
+  c.connect_list(node_list)
+  c.run()
+  return c.getresult()
+
+
+def call_vg_list(node_list):
+  """Gets the volume group list.
+
+  This is a multi-node call.
+
+  """
+  c = Client("vg_list", [])
+  c.connect_list(node_list)
+  c.run()
+  return c.getresult()
+
+
+def call_bridges_exist(node, bridges_list):
+  """Checks if a node has all the bridges given.
+
+  This method checks if all bridges given in the bridges_list are
+  present on the remote node, so that an instance that uses interfaces
+  on those bridges can be started.
+
+  This is a single-node call.
+
+  """
+  c = Client("bridges_exist", [bridges_list])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_instance_start(node, instance, extra_args):
+  """Stars an instance.
+
+  This is a single-node call.
+
+  """
+  c = Client("instance_start", [instance.Dumps(), extra_args])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_instance_shutdown(node, instance):
+  """Stops an instance.
+
+  This is a single-node call.
+
+  """
+  c = Client("instance_shutdown", [instance.Dumps()])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_instance_os_add(node, inst, osdev, swapdev):
+  """Installs an OS on the given instance.
+
+  This is a single-node call.
+
+  """
+  params = [inst.Dumps(), osdev, swapdev]
+  c = Client("instance_os_add", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_instance_info(node, instance):
+  """Returns information about a single instance.
+
+  This is a single-node call.
+
+  """
+  c = Client("instance_info", [instance])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_all_instances_info(node_list):
+  """Returns information about all instances on a given node.
+
+  This is a single-node call.
+
+  """
+  c = Client("all_instances_info", [])
+  c.connect_list(node_list)
+  c.run()
+  return c.getresult()
+
+
+def call_instance_list(node_list):
+  """Returns the list of running instances on a given node.
+
+  This is a single-node call.
+
+  """
+  c = Client("instance_list", [])
+  c.connect_list(node_list)
+  c.run()
+  return c.getresult()
+
+
+def call_node_info(node_list, vg_name):
+  """Return node information.
+
+  This will return memory information and volume group size and free
+  space.
+
+  This is a multi-node call.
+
+  """
+  c = Client("node_info", [vg_name])
+  c.connect_list(node_list)
+  c.run()
+  retux = c.getresult()
+
+  for node_name in retux:
+    ret = retux.get(node_name, False)
+    if type(ret) != dict:
+      logger.Error("could not connect to node %s" % (node_name))
+      ret = {}
+      # store the placeholder back so the defaults filled in below are
+      # visible to the caller
+      retux[node_name] = ret
+
+    utils.CheckDict(ret,
+                    { 'memory_total' : '-',
+                      'memory_dom0' : '-',
+                      'memory_free' : '-',
+                      'vg_size' : 'node_unreachable',
+                      'vg_free' : '-' },
+                    "call_node_info",
+                    )
+  return retux
+
+
+def call_node_add(node, dsa, dsapub, rsa, rsapub, ssh, sshpub):
+  """Add a node to the cluster.
+
+  This is a single-node call.
+
+  """
+  params = [dsa, dsapub, rsa, rsapub, ssh, sshpub]
+  c = Client("node_add", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_node_verify(node_list, checkdict):
+  """Request verification of given parameters.
+
+  This is a multi-node call.
+
+  """
+  c = Client("node_verify", [checkdict])
+  c.connect_list(node_list)
+  c.run()
+  return c.getresult()
+
+
+def call_node_start_master(node):
+  """Tells a node to activate itself as a master.
+
+  This is a single-node call.
+
+  """
+  c = Client("node_start_master", [])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_node_stop_master(node):
+  """Tells a node to demote itself from master status.
+
+  This is a single-node call.
+
+  """
+  c = Client("node_stop_master", [])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_version(node_list):
+  """Query node version.
+
+  This is a multi-node call.
+
+  """
+  c = Client("version", [])
+  c.connect_list(node_list)
+  c.run()
+  return c.getresult()
+
+
+def call_configfile_list(node_list):
+  """Return list of existing configuration files.
+
+  This is a multi-node call.
+
+  """
+  c = Client("configfile_list", [])
+  c.connect_list(node_list)
+  c.run()
+  return c.getresult()
+
+def call_blockdev_create(node, bdev, size, on_primary):
+  """Request creation of a given block device.
+
+  This is a single-node call.
+
+  """
+  params = [bdev.Dumps(), size, on_primary]
+  c = Client("blockdev_create", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_blockdev_remove(node, bdev):
+  """Request removal of a given block device.
+
+  This is a single-node call.
+
+  """
+  c = Client("blockdev_remove", [bdev.Dumps()])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_blockdev_assemble(node, disk, on_primary):
+  """Request assembling of a given block device.
+
+  This is a single-node call.
+
+  """
+  params = [disk.Dumps(), on_primary]
+  c = Client("blockdev_assemble", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_blockdev_shutdown(node, disk):
+  """Request shutdown of a given block device.
+
+  This is a single-node call.
+
+  """
+  c = Client("blockdev_shutdown", [disk.Dumps()])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_blockdev_addchild(node, bdev, ndev):
+  """Request adding a new child to a (mirroring) device.
+
+  This is a single-node call.
+
+  """
+  params = [bdev.Dumps(), ndev.Dumps()]
+  c = Client("blockdev_addchild", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_blockdev_removechild(node, bdev, ndev):
+  """Request removing a new child from a (mirroring) device.
+
+  This is a single-node call.
+
+  """
+  params = [bdev.Dumps(), ndev.Dumps()]
+  c = Client("blockdev_removechild", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_blockdev_getmirrorstatus(node, disks):
+  """Request status of a (mirroring) device.
+
+  This is a single-node call.
+
+  """
+  params = [dsk.Dumps() for dsk in disks]
+  c = Client("blockdev_getmirrorstatus", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_blockdev_find(node, disk):
+  """Request identification of a given block device.
+
+  This is a single-node call.
+
+  """
+  c = Client("blockdev_find", [disk.Dumps()])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_upload_file(node_list, file_name):
+  """Upload a file.
+
+  The node will refuse the operation in case the file is not on the
+  approved file list.
+
+  This is a multi-node call.
+
+  """
+  fh = file(file_name)
+  try:
+    data = fh.read()
+  finally:
+    fh.close()
+  st = os.stat(file_name)
+  params = [file_name, data, st.st_mode, st.st_uid, st.st_gid,
+            st.st_atime, st.st_mtime]
+  c = Client("upload_file", params)
+  c.connect_list(node_list)
+  c.run()
+  return c.getresult()
+
+
+def call_os_diagnose(node_list):
+  """Request a diagnose of OS definitions.
+
+  This is a multi-node call.
+
+  """
+  c = Client("os_diagnose", [])
+  c.connect_list(node_list)
+  c.run()
+  result = c.getresult()
+  new_result = {}
+  for node_name in result:
+    nr = []
+    if result[node_name]:
+      for data in result[node_name]:
+        if data:
+          if isinstance(data, basestring):
+            nr.append(objects.ConfigObject.Loads(data))
+          elif isinstance(data, tuple) and len(data) == 2:
+            nr.append(errors.InvalidOS(data[0], data[1]))
+          else:
+            raise errors.ProgrammerError, ("Invalid data from"
+                                           " xcserver.os_diagnose")
+    new_result[node_name] = nr
+  return new_result
+
+
+def call_os_get(node_list, name):
+  """Returns an OS definition.
+
+  This is a multi-node call.
+
+  """
+  c = Client("os_get", [name])
+  c.connect_list(node_list)
+  c.run()
+  result = c.getresult()
+  new_result = {}
+  for node_name in result:
+    data = result[node_name]
+    if isinstance(data, basestring):
+      new_result[node_name] = objects.ConfigObject.Loads(data)
+    elif isinstance(data, tuple) and len(data) == 2:
+      new_result[node_name] = errors.InvalidOS(data[0], data[1])
+    else:
+      new_result[node_name] = data
+  return new_result
+
+
+def call_hooks_runner(node_list, hpath, phase, env):
+  """Call the hooks runner.
+
+  Args:
+    - hpath: the hooks path
+    - phase: the hooks phase
+    - env: a dictionary with the environment
+
+  This is a multi-node call.
+
+  """
+  params = [hpath, phase, env]
+  c = Client("hooks_runner", params)
+  c.connect_list(node_list)
+  c.run()
+  result = c.getresult()
+  return result
+
+
+def call_blockdev_snapshot(node, cf_bdev):
+  """Request a snapshot of the given block device.
+
+  This is a single-node call.
+
+  """
+  c = Client("blockdev_snapshot", [cf_bdev.Dumps()])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_snapshot_export(node, snap_bdev, dest_node, instance):
+  """Request the export of a given snapshot.
+
+  This is a single-node call.
+
+  """
+  params = [snap_bdev.Dumps(), dest_node, instance.Dumps()]
+  c = Client("snapshot_export", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_finalize_export(node, instance, snap_disks):
+  """Request the completion of an export operation.
+
+  This writes the export config file, etc.
+
+  This is a single-node call.
+
+  """
+  flat_disks = []
+  for disk in snap_disks:
+    flat_disks.append(disk.Dumps())
+  params = [instance.Dumps(), flat_disks]
+  c = Client("finalize_export", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_export_info(node, path):
+  """Queries the export information in a given path.
+
+  This is a single-node call.
+
+  """
+  c = Client("export_info", [path])
+  c.connect(node)
+  c.run()
+  result = c.getresult().get(node, False)
+  if not result:
+    return result
+  return objects.SerializableConfigParser.Loads(result)
+
+
+def call_instance_os_import(node, inst, osdev, swapdev, src_node, src_image):
+  """Request the import of a backup into an instance.
+
+  This is a single-node call.
+
+  """
+  params = [inst.Dumps(), osdev, swapdev, src_node, src_image]
+  c = Client("instance_os_import", params)
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_export_list(node_list):
+  """Gets the stored exports list.
+
+  This is a multi-node call.
+
+  """
+  c = Client("export_list", [])
+  c.connect_list(node_list)
+  c.run()
+  result = c.getresult()
+  return result
+
+
+def call_export_remove(node, export):
+  """Requests removal of a given export.
+
+  This is a single-node call.
+
+  """
+  c = Client("export_remove", [export])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
+
+
+def call_node_leave_cluster(node):
+  """Requests a node to clean the cluster information it has.
+
+  This will remove the configuration information from the ganeti data
+  dir.
+
+  This is a single-node call.
+
+  """
+  c = Client("node_leave_cluster", [])
+  c.connect(node)
+  c.run()
+  return c.getresult().get(node, False)
diff --git a/lib/ssconf.py b/lib/ssconf.py
new file mode 100644
index 0000000000000000000000000000000000000000..fd41974b5465448d5e10c9fce4c620c17efc8b34
--- /dev/null
+++ b/lib/ssconf.py
@@ -0,0 +1,163 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Global Configuration data for Ganeti.
+
+This module provides the interface to a special case of cluster
+configuration data, which is mostly static and available to all nodes.
+
+"""
+
+import os
+import tempfile
+import errno
+import socket
+
+from ganeti import errors
+from ganeti import constants
+
+
+class SimpleStore:
+  """Interface to static cluster data.
+
+  This is different from the config.ConfigWriter class in that it
+  holds data that is (mostly) constant after the cluster
+  initialization. Its purpose is to allow limited customization of
+  things which would otherwise normally live in constants.py. Note
+  that this data cannot live in ConfigWriter as that is available only
+  on the master node, and our data must be readable by both the master
+  and the nodes.
+
+  Other particularities of the datastore:
+    - keys are restricted to predefined values
+    - values are small (<4k)
+    - since the data is practically static, read keys are cached in memory
+    - some keys are handled specially (read from the system, so
+      we can't update them)
+
+  """
+  _SS_FILEPREFIX = "ssconf_"
+  SS_HYPERVISOR = "hypervisor"
+  SS_NODED_PASS = "node_pass"
+  _VALID_KEYS = (SS_HYPERVISOR, SS_NODED_PASS,)
+  _MAX_SIZE = 4096
+
+  def __init__(self, cfg_location=None):
+    if cfg_location is None:
+      self._cfg_dir = constants.DATA_DIR
+    else:
+      self._cfg_dir = cfg_location
+    self._cache = {}
+
+  def KeyToFilename(self, key):
+    """Convert a given key into filename.
+
+    """
+    if key not in self._VALID_KEYS:
+      raise errors.ProgrammerError, ("Invalid key requested from SSConf: '%s'"
+                                     % str(key))
+
+    filename = self._cfg_dir + '/' + self._SS_FILEPREFIX + key
+    return filename
+
+  def _ReadFile(self, key):
+    """Generic routine to read keys.
+
+    This will read the file which holds the value requested. Errors
+    will be changed into ConfigurationErrors.
+
+    """
+    if key in self._cache:
+      return self._cache[key]
+    filename = self.KeyToFilename(key)
+    try:
+      fh = file(filename, 'r')
+      try:
+        data = fh.readline(self._MAX_SIZE)
+        data = data.rstrip('\n')
+      finally:
+        fh.close()
+    except EnvironmentError, err:
+      raise errors.ConfigurationError, ("Can't read from the ssconf file:"
+                                        " '%s'" % str(err))
+    self._cache[key] = data
+    return data
+
+  def GetNodeDaemonPort(self):
+    """Get the node daemon port for this cluster.
+
+    Note that this routine does not read a ganeti-specific file, but
+    instead uses socket.getservbyname to allow pre-customization of
+    this parameter outside of ganeti.
+
+    """
+    try:
+      port = socket.getservbyname("ganeti-noded", "tcp")
+    except socket.error:
+      port = constants.DEFAULT_NODED_PORT
+
+    return port
+
+  def GetHypervisorType(self):
+    """Get the hypervisor type for this cluster.
+
+    """
+    return self._ReadFile(self.SS_HYPERVISOR)
+
+  def GetNodeDaemonPassword(self):
+    """Get the node password for this cluster.
+
+    """
+    return self._ReadFile(self.SS_NODED_PASS)
+
+  def SetKey(self, key, value):
+    """Set the value of a key.
+
+    This should be used only when adding a node to a cluster.
+
+    """
+    file_name = self.KeyToFilename(key)
+    dir_name, small_name = os.path.split(file_name)
+    fd, new_name = tempfile.mkstemp('.new', small_name, dir_name)
+    # here we need to make sure we remove the temp file, if any error
+    # leaves it in place
+    try:
+      os.chown(new_name, 0, 0)
+      os.chmod(new_name, 0400)
+      os.write(fd, "%s\n" % str(value))
+      os.fsync(fd)
+      os.rename(new_name, file_name)
+      self._cache[key] = value
+    finally:
+      os.close(fd)
+      try:
+        os.unlink(new_name)
+      except OSError, err:
+        if err.errno != errno.ENOENT:
+          raise
+
+  def GetFileList(self):
+    """Return the lis of all config files.
+
+    This is used for computing node replication data.
+
+    """
+    return [self.KeyToFilename(key) for key in self._VALID_KEYS]
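+
+
+# Usage sketch (illustrative):
+#   ss = SimpleStore()
+#   ss.GetHypervisorType()              # reads <cfg_dir>/ssconf_hypervisor
+#   ss.KeyToFilename(ss.SS_NODED_PASS)  # => "<cfg_dir>/ssconf_node_pass"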
diff --git a/lib/ssh.py b/lib/ssh.py
new file mode 100644
index 0000000000000000000000000000000000000000..4a1f3a2d8431839ffa9ca26404888bf2d490e42b
--- /dev/null
+++ b/lib/ssh.py
@@ -0,0 +1,131 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Module encapsulating ssh functionality.
+
+"""
+
+
+import os
+
+from ganeti import logger
+from ganeti import utils
+from ganeti import errors
+
+def SSHCall(hostname, user, command, batch=True, ask_key=False):
+  """Execute a command on a remote node.
+
+  This method has the same return value as `utils.RunCmd()`, which it
+  uses to launch ssh.
+
+  Args:
+    hostname: the target host, string
+    user: user to auth as
+    command: the command
+    batch: if true, run ssh in batch mode (no interactive prompts);
+           incompatible with ask_key
+    ask_key: if true, allow ssh to ask about unknown host keys
+
+  Returns:
+    `utils.RunResult` as for `utils.RunCmd()`
+
+  """
+
+  argv = ["ssh", "-q", "-oEscapeChar=none"]
+  if batch:
+    argv.append("-oBatchMode=yes")
+    # if we are in batch mode, we can't ask the key
+    if ask_key:
+      raise errors.ProgrammerError, ("SSH call requested conflicting options")
+  if ask_key:
+    argv.append("-oStrictHostKeyChecking=ask")
+  else:
+    argv.append("-oStrictHostKeyChecking=yes")
+  argv.extend(["%s@%s" % (user, hostname), command])
+  return utils.RunCmd(argv)
+
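+# Usage sketch (illustrative; VerifyNodeHostname below is a real
+# caller):
+#   result = SSHCall("node1.example.com", "root", "hostname")
+#   if result.failed:
+#     logger.Error("ssh failed: %s" % result.output)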
+
+def CopyFileToNode(node, filename):
+  """Copy a file to another node with scp.
+
+  Args:
+    node: node in the cluster
+    filename: absolute pathname of a local file
+
+  Returns:
+    success: True/False
+
+  """
+  if not os.path.isfile(filename):
+    logger.Error("file %s does not exist" % (filename))
+    return False
+
+  if not os.path.isabs(filename):
+    logger.Error("file %s must be an absolute path" % (filename))
+    return False
+
+  command = ["scp", "-q", "-p", "-oStrictHostKeyChecking=yes",
+             "-oBatchMode=yes", filename, "%s:%s" % (node, filename)]
+
+  result = utils.RunCmd(command)
+
+  if result.failed:
+    logger.Error("copy to node %s failed (%s) error %s,"
+                 " command was %s" %
+                 (node, result.fail_reason, result.output, result.cmd))
+
+  return not result.failed
+
+
+def VerifyNodeHostname(node):
+  """Verify hostname consistency via SSH.
+
+
+  This functions connects via ssh to a node and compares the hostname
+  reported by the node to the name with have (the one that we
+  connected to).
+
+  This is used to detect problems in ssh known_hosts files
+  (conflicting known hosts) and incosistencies between dns/hosts
+  entries and local machine names
+
+  Args:
+    node: nodename of a host to check. can be short or full qualified hostname
+
+  Returns:
+    (success, detail)
+    where
+      success: True/False
+      detail: String with details
+
+  """
+  retval = SSHCall(node, 'root', 'hostname')
+
+  if retval.failed:
+    msg = "ssh problem"
+    output = retval.output
+    if output:
+      msg += ": %s" % output
+    return False, msg
+
+  remotehostname = retval.stdout.strip()
+
+  if not remotehostname or remotehostname != node:
+    return False, "hostname mismatch, got %s" % remotehostname
+
+  return True, "host matches"
diff --git a/lib/utils.py b/lib/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..0df99a69e545ee6515de52857071a73656317d49
--- /dev/null
+++ b/lib/utils.py
@@ -0,0 +1,748 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Ganeti small utilities
+"""
+
+
+import sys
+import os
+import sha
+import time
+import popen2
+import re
+import socket
+import tempfile
+import shutil
+from errno import ENOENT, ENOTDIR, EISDIR, EEXIST
+
+from ganeti import logger
+from ganeti import errors
+
+_locksheld = []
+_re_shell_unquoted = re.compile('^[-.,=:/_+@A-Za-z0-9]+$')
+
+
+class RunResult(object):
+  """Simple class for holding the result of running external programs.
+
+  Instance variables:
+    exit_code: the exit code of the program, or None (if the program
+               didn't exit())
+    signal: numeric signal that caused the program to finish, or None
+            (if the program wasn't terminated by a signal)
+    stdout: the standard output of the program
+    stderr: the standard error of the program
+    failed: a Boolean value which is True in case the program was
+            terminated by a signal or exited with a non-zero exit code
+    fail_reason: a string detailing the termination reason
+
+  """
+  __slots__ = ["exit_code", "signal", "stdout", "stderr",
+               "failed", "fail_reason", "cmd"]
+
+  def __init__(self, exit_code, signal, stdout, stderr, cmd):
+    self.cmd = cmd
+    self.exit_code = exit_code
+    self.signal = signal
+    self.stdout = stdout
+    self.stderr = stderr
+    self.failed = (signal is not None or exit_code != 0)
+
+    if self.signal is not None:
+      self.fail_reason = "terminated by signal %s" % self.signal
+    elif self.exit_code is not None:
+      self.fail_reason = "exited with exit code %s" % self.exit_code
+    else:
+      self.fail_reason = "unable to determine termination reason"
+
+  def _GetOutput(self):
+    """Returns the combined stdout and stderr for easier usage.
+
+    """
+    return self.stdout + self.stderr
+
+  output = property(_GetOutput, None, None, "Return full output")
+
+
+def _GetLockFile(subsystem):
+  """Compute the file name for a given lock name."""
+  return "/var/lock/ganeti_lock_%s" % subsystem
+
+
+def Lock(name, max_retries=None, debug=False):
+  """Lock a given subsystem.
+
+  In case the lock is already held by an alive process, the function
+  will sleep indefinitely and poll with a one-second interval.
+
+  When the optional integer argument 'max_retries' is passed with a
+  non-zero value, the function will sleep only for this number of
+  times, and then it will raise a LockError if the lock can't be
+  acquired. Passing in a negative number will cause only one try to
+  get the lock. Passing a positive number will make the function retry
+  for approximately that number of seconds.
+
+  """
+  lockfile = _GetLockFile(name)
+
+  if name in _locksheld:
+    raise errors.LockError('Lock "%s" already held!' % (name,))
+
+  errcount = 0
+
+  retries = 0
+  while True:
+    try:
+      fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_RDWR | os.O_SYNC)
+      break
+    except OSError, creat_err:
+      if creat_err.errno != EEXIST:
+        raise errors.LockError, ("Can't create the lock file. Error '%s'." %
+                                 str(creat_err))
+
+      try:
+        pf = open(lockfile, 'r')
+      except IOError, open_err:
+        errcount += 1
+        if errcount >= 5:
+          raise errors.LockError, ("Lock file exists but cannot be opened."
+                                   " Error: '%s'." % str(open_err))
+        time.sleep(1)
+        continue
+
+      try:
+        pid = int(pf.read())
+      except ValueError:
+        raise errors.LockError('Invalid pid string in %s' %
+                               (lockfile,))
+
+      if not IsProcessAlive(pid):
+        raise errors.LockError, ('Stale lockfile %s for pid %d?' %
+                                 (lockfile, pid))
+
+      if max_retries and max_retries <= retries:
+        raise errors.LockError, ("Can't acquire lock during the specified"
+                                 " time, aborting.")
+      if retries == 5 and (debug or sys.stdin.isatty()):
+        logger.ToStderr("Waiting for '%s' lock from pid %d..." % (name, pid))
+
+      time.sleep(1)
+      retries += 1
+      continue
+
+  os.write(fd, '%d\n' % (os.getpid(),))
+  os.close(fd)
+
+  _locksheld.append(name)
+
+
+def Unlock(name):
+  """Unlock a given subsystem."""
+
+  lockfile = _GetLockFile(name)
+
+  try:
+    fd = os.open(lockfile, os.O_RDONLY)
+  except OSError:
+    raise errors.LockError('Lock "%s" not held.' % (name,))
+
+  f = os.fdopen(fd, 'r')
+  pid_str = f.read()
+
+  try:
+    pid = int(pid_str)
+  except ValueError:
+    raise errors.LockError('Unable to determine PID of locking process.')
+
+  if pid != os.getpid():
+    raise errors.LockError('Lock not held by me (%d != %d)' %
+                           (os.getpid(), pid,))
+
+  os.unlink(lockfile)
+  _locksheld.remove(name)
+
+
+def LockCleanup():
+  """Remove all locks."""
+
+  for lock in _locksheld:
+    Unlock(lock)
+
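+# Usage sketch (illustrative): the 'cmd' lock, for example, serializes
+# ganeti commands on a node (see RunCmdUnlocked below):
+#   Lock('cmd', max_retries=30)
+#   try:
+#     pass  # ... do the serialized work ...
+#   finally:
+#     Unlock('cmd')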
+
+def RunCmd(cmd):
+  """Execute a (shell) command.
+
+  The command should not read from its standard input, as it will be
+  closed.
+
+  Args:
+    cmd: command to run. (str)
+
+  Returns: `RunResult` instance
+
+  """
+  if isinstance(cmd, list):
+    cmd = [str(val) for val in cmd]
+  child = popen2.Popen3(cmd, capturestderr=True)
+
+  child.tochild.close()
+  out = child.fromchild.read()
+  err = child.childerr.read()
+
+  status = child.wait()
+  if os.WIFSIGNALED(status):
+    signal = os.WTERMSIG(status)
+  else:
+    signal = None
+  if os.WIFEXITED(status):
+    exitcode = os.WEXITSTATUS(status)
+  else:
+    exitcode = None
+
+  if isinstance(cmd, list):
+    strcmd = " ".join(cmd)
+  else:
+    strcmd = str(cmd)
+
+  return RunResult(exitcode, signal, out, err, strcmd)
+
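+# Example (illustrative):
+#   result = RunCmd(["vgs", "--noheadings"])
+#   if result.failed:
+#     logger.Error("vgs failed (%s): %s" %
+#                  (result.fail_reason, result.output))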
+
+def RunCmdUnlocked(cmd):
+  """Execute a shell command without the 'cmd' lock.
+
+  This variant of `RunCmd()` drops the 'cmd' lock before running the
+  command and re-acquires it afterwards, thus it can be used to call
+  other ganeti commands.
+
+  The argument and return values are the same as for the `RunCmd()`
+  function.
+
+  Args:
+    cmd - command to run. (str)
+
+  Returns:
+    `RunResult`
+
+  """
+  Unlock('cmd')
+  ret = RunCmd(cmd)
+  Lock('cmd')
+
+  return ret
+
+
+def RemoveFile(filename):
+  """Remove a file ignoring some errors.
+
+  Remove a file, ignoring non-existing ones or directories. Other
+  errors are passed.
+
+  """
+  try:
+    os.unlink(filename)
+  except OSError, err:
+    if err.errno not in (ENOENT, EISDIR):
+      raise
+
+
+def _FingerprintFile(filename):
+  """Compute the fingerprint of a file.
+
+  If the file does not exist, a None will be returned
+  instead.
+
+  Args:
+    filename - Filename (str)
+
+  """
+  if not (os.path.exists(filename) and os.path.isfile(filename)):
+    return None
+
+  f = open(filename)
+
+  fp = sha.sha()
+  while True:
+    data = f.read(4096)
+    if not data:
+      break
+
+    fp.update(data)
+
+  f.close()
+  return fp.hexdigest()
+
+
+def FingerprintFiles(files):
+  """Compute fingerprints for a list of files.
+
+  Args:
+    files - array of filenames.  ( [str, ...] )
+
+  Return value:
+    dictionary of filename: fingerprint for the files that exist
+
+  """
+  ret = {}
+
+  for filename in files:
+    cksum = _FingerprintFile(filename)
+    if cksum:
+      ret[filename] = cksum
+
+  return ret
+
+
+def CheckDict(target, template, logname=None):
+  """Ensure a dictionary has a required set of keys.
+
+  For the given dictionaries `target` and `template`, ensure target
+  has all the keys from template. Missing keys are added with values
+  from template.
+
+  Args:
+    target   - the dictionary to check
+    template - template dictionary
+    logname  - a caller-chosen string to identify the debug log
+               entry; if None, no logging will be done
+
+  Returns value:
+    None
+
+  """
+  missing = []
+  for k in template:
+    if k not in target:
+      missing.append(k)
+      target[k] = template[k]
+
+  if missing and logname:
+    logger.Debug('%s missing keys %s' %
+                 (logname, ', '.join(missing)))
+
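+# Example (illustrative):
+#   d = {'memory_total': 2048}
+#   CheckDict(d, {'memory_total': '-', 'memory_free': '-'}, "demo")
+#   # d is now {'memory_total': 2048, 'memory_free': '-'}, and the
+#   # missing key has been logged under the "demo" tag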
+
+def IsProcessAlive(pid):
+  """Check if a given pid exists on the system.
+
+  Returns: true or false, depending on if the pid exists or not
+
+  Remarks: zombie processes treated as not alive
+
+  """
+  try:
+    f = open("/proc/%d/status" % pid)
+  except IOError, err:
+    if err.errno in (ENOENT, ENOTDIR):
+      return False
+    # unexpected errors (e.g. permission denied) are re-raised, since
+    # otherwise 'f' would be used below without being defined
+    raise
+
+  alive = True
+  try:
+    data = f.readlines()
+    if len(data) > 1:
+      state = data[1].split()
+      if len(state) > 1 and state[1] == "Z":
+        alive = False
+  finally:
+    f.close()
+
+  return alive
+
+
+def MatchNameComponent(key, name_list):
+  """Try to match a name against a list.
+
+  This function will try to match a name like test1 against a list
+  like ['test1.example.com', 'test2.example.com', ...]. Against this
+  list, 'test1' as well as 'test1.example' will match, but not
+  'test1.ex'. A multiple match will be considered as no match at all
+  (e.g. 'test1' against ['test1.example.com', 'test1.example.org']).
+
+  Args:
+    key: the name to be searched
+    name_list: the list of strings against which to search the key
+
+  Returns:
+    None if there is no match *or* if there are multiple matches
+    otherwise the element from the list which matches
+
+  """
+  mo = re.compile("^%s(\..*)?$" % re.escape(key))
+  names_filtered = [name for name in name_list if mo.match(name) is not None]
+  if len(names_filtered) != 1:
+    return None
+  return names_filtered[0]
+
+
+def LookupHostname(hostname):
+  """Look up hostname
+
+  Args:
+    hostname: hostname to look up; it can also be a non-FQDN name
+
+  Returns:
+    Dictionary with keys:
+    - ip: IP addr
+    - hostname_full: hostname fully qualified
+    - hostname: hostname fully qualified (historic artifact)
+  """
+
+  try:
+    (fqdn, dummy, ipaddrs) = socket.gethostbyname_ex(hostname)
+    ipaddr = ipaddrs[0]
+  except socket.gaierror:
+    # hostname not found in DNS
+    return None
+
+  returnhostname = {
+    "ip": ipaddr,
+    "hostname_full": fqdn,
+    "hostname": fqdn,
+    }
+
+  return returnhostname
+
+
+def ListVolumeGroups():
+  """List volume groups and their size
+
+  Returns:
+     Dictionary with keys volume name and values the size of the volume
+
+  """
+  command = "vgs --noheadings --units m --nosuffix -o name,size"
+  result = RunCmd(command)
+  retval = {}
+  if result.failed:
+    return retval
+
+  for line in result.stdout.splitlines():
+    try:
+      name, size = line.split()
+      size = int(float(size))
+    except (IndexError, ValueError), err:
+      logger.Error("Invalid output from vgs (%s): %s" % (err, line))
+      continue
+
+    retval[name] = size
+
+  return retval
+
+
+def BridgeExists(bridge):
+  """Check whether the given bridge exists in the system
+
+  Returns:
+     True if it does, false otherwise.
+
+  """
+
+  return os.path.isdir("/sys/class/net/%s/bridge" % bridge)
+
+
+def NiceSort(name_list):
+  """Sort a list of strings based on digit and non-digit groupings.
+
+  Given a list of names ['a1', 'a10', 'a11', 'a2'] this function will
+  sort the list in the logical order ['a1', 'a2', 'a10', 'a11'].
+
+  The sort algorithm breaks each name in groups of either only-digits
+  or no-digits. Only the first eight such groups are considered, and
+  after that we just use what's left of the string.
+
+  Return value
+    - a copy of the list sorted according to our algorithm
+
+  """
+  _SORTER_BASE = "(\D+|\d+)"
+  _SORTER_FULL = "^%s%s?%s?%s?%s?%s?%s?%s?.*$" % (_SORTER_BASE, _SORTER_BASE,
+                                                  _SORTER_BASE, _SORTER_BASE,
+                                                  _SORTER_BASE, _SORTER_BASE,
+                                                  _SORTER_BASE, _SORTER_BASE)
+  _SORTER_RE = re.compile(_SORTER_FULL)
+  _SORTER_NODIGIT = re.compile("^\D*$")
+  def _TryInt(val):
+    """Attempts to convert a variable to integer."""
+    if val is None or _SORTER_NODIGIT.match(val):
+      return val
+    rval = int(val)
+    return rval
+
+  to_sort = [([_TryInt(grp) for grp in _SORTER_RE.match(name).groups()], name)
+             for name in name_list]
+  to_sort.sort()
+  return [tup[1] for tup in to_sort]
+
+
+def CheckDaemonAlive(pid_file, process_string):
+  """Check wether the specified daemon is alive.
+
+  Args:
+   - pid_file: file to read the daemon pid from, the file is
+               expected to contain only a single line containing
+               only the PID
+   - process_string: a substring that we expect to find in
+                     the command line of the daemon process
+
+  Returns:
+   - True if the daemon is judged to be alive (that is:
+      - the PID file exists, is readable and contains a number
+      - a process of the specified PID is running
+      - that process contains the specified string in its
+        command line
+      - the process is not in state Z (zombie))
+   - False otherwise
+
+  """
+  try:
+    pid_file = file(pid_file, 'r')
+    try:
+      pid = int(pid_file.readline())
+    finally:
+      pid_file.close()
+
+    cmdline_file_path = "/proc/%s/cmdline" % (pid)
+    cmdline_file = open(cmdline_file_path, 'r')
+    try:
+      cmdline = cmdline_file.readline()
+    finally:
+      cmdline_file.close()
+
+    if not process_string in cmdline:
+      return False
+
+    stat_file_path = "/proc/%s/stat" % (pid)
+    stat_file = open(stat_file_path, 'r')
+    try:
+      process_state = stat_file.readline().split()[2]
+    finally:
+      stat_file.close()
+
+    if process_state == 'Z':
+      return False
+
+  except (IndexError, IOError, ValueError):
+    return False
+
+  return True
+
+
+def TryConvert(fn, val):
+  """Try to convert a value ignoring errors.
+
+  This function tries to apply function `fn` to `val`. If no
+  ValueError or TypeError exceptions are raised, it will return the
+  result, else it will return the original value. Any other exceptions
+  are propagated to the caller.
+
+  """
+  try:
+    nv = fn(val)
+  except (ValueError, TypeError):
+    nv = val
+  return nv
+
+
+def IsValidIP(ip):
+  """Verifies the syntax of an IP address.
+
+  This function checks if the given IP address is valid or not, based
+  purely on syntax (not on IP ranges, class calculations or anything
+  else).
+
+  """
+  unit = "(0|[1-9]\d{0,2})"
+  return re.match("^%s\.%s\.%s\.%s$" % (unit, unit, unit, unit), ip)
+
+
+def IsValidShellParam(word):
+  """Verifies is the given word is safe from the shell's p.o.v.
+
+  This means that we can pass this to a command via the shell and be
+  sure that it doesn't alter the command line and is passed as such to
+  the actual command.
+
+  Note that we are overly restrictive here, in order to be on the safe
+  side.
+
+  """
+  return bool(re.match("^[-a-zA-Z0-9._+/:%@]+$", word))
+
+
+def BuildShellCmd(template, *args):
+  """Build a safe shell command line from the given arguments.
+
+  This function will check all arguments in the args list so that they
+  are valid shell parameters (i.e. they don't contain shell
+  metacharacters). If everything is ok, it will return the result of
+  template % args.
+
+  """
+  for word in args:
+    if not IsValidShellParam(word):
+      raise errors.ProgrammerError, ("Shell argument '%s' contains"
+                                     " invalid characters" % word)
+  return template % args
+
+
+def FormatUnit(value):
+  """Formats an incoming number of MiB with the appropriate unit.
+
+  Value needs to be passed as a numeric type. Return value is always a string.
+
+  """
+  if value < 1024:
+    return "%dM" % round(value, 0)
+
+  elif value < (1024 * 1024):
+    return "%0.1fG" % round(float(value) / 1024, 1)
+
+  else:
+    return "%0.1fT" % round(float(value) / 1024 / 1024, 1)
+
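+# Examples (illustrative): FormatUnit(768) => "768M",
+# FormatUnit(2048) => "2.0G", FormatUnit(3 * 1024 * 1024) => "3.0T".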
+
+def ParseUnit(input_string):
+  """Tries to extract number and scale from the given string.
+
+  Input must be in the format NUMBER+ [DOT NUMBER+] SPACE* [UNIT]. If no unit
+  is specified, it defaults to MiB. Return value is always an int in MiB.
+
+  """
+  m = re.match('^([.\d]+)\s*([a-zA-Z]+)?$', input_string)
+  if not m:
+    raise errors.UnitParseError, ("Invalid format")
+
+  value = float(m.groups()[0])
+
+  unit = m.groups()[1]
+  if unit:
+    lcunit = unit.lower()
+  else:
+    lcunit = 'm'
+
+  if lcunit in ('m', 'mb', 'mib'):
+    # Value already in MiB
+    pass
+
+  elif lcunit in ('g', 'gb', 'gib'):
+    value *= 1024
+
+  elif lcunit in ('t', 'tb', 'tib'):
+    value *= 1024 * 1024
+
+  else:
+    raise errors.UnitParseError, ("Unknown unit: %s" % unit)
+
+  # Make sure we round up
+  if int(value) < value:
+    value += 1
+
+  # Round up to the next multiple of 4
+  value = int(value)
+  if value % 4:
+    value += 4 - value % 4
+
+  return value
+
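+# Worked example (illustrative): ParseUnit("1.3G") is 1.3 * 1024 =
+# 1331.2 MiB, rounded up to the integer 1332, which is already a
+# multiple of 4, so the result is 1332.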
+
+def AddAuthorizedKey(file_name, key):
+  """Adds an SSH public key to an authorized_keys file.
+
+  Args:
+    file_name: Path to authorized_keys file
+    key: String containing key
+  """
+  key_fields = key.split()
+
+  f = open(file_name, 'a+')
+  try:
+    nl = True
+    for line in f:
+      # Ignore whitespace changes
+      if line.split() == key_fields:
+        break
+      nl = line.endswith('\n')
+    else:
+      if not nl:
+        f.write("\n")
+      f.write(key.rstrip('\r\n'))
+      f.write("\n")
+      f.flush()
+  finally:
+    f.close()
+
+
+def RemoveAuthorizedKey(file_name, key):
+  """Removes an SSH public key from an authorized_keys file.
+
+  Args:
+    file_name: Path to authorized_keys file
+    key: String containing key
+  """
+  key_fields = key.split()
+
+  fd, tmpname = tempfile.mkstemp(dir=os.path.dirname(file_name))
+  out = os.fdopen(fd, 'w')
+  try:
+    f = open(file_name, 'r')
+    try:
+      for line in f:
+        # Ignore whitespace changes while comparing lines
+        if line.split() != key_fields:
+          out.write(line)
+
+      out.flush()
+      os.rename(tmpname, file_name)
+    finally:
+      f.close()
+  finally:
+    out.close()
+
+
+def CreateBackup(file_name):
+  """Creates a backup of a file.
+
+  Returns: the path to the newly created backup file.
+
+  """
+  if not os.path.isfile(file_name):
+    raise errors.ProgrammerError, ("Can't make a backup of a non-file '%s'" %
+                                   file_name)
+
+  # Warning: the following code contains a race condition when we create more
+  # than one backup of the same file in a second.
+  backup_name = file_name + '.backup-%d' % int(time.time())
+  shutil.copyfile(file_name, backup_name)
+  return backup_name
+
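+# Example (illustrative): CreateBackup("/etc/hosts") copies the file to
+# something like "/etc/hosts.backup-1182950000" (the suffix being the
+# current Unix timestamp) and returns that path.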
+
+def ShellQuote(value):
+  """Quotes shell argument according to POSIX.
+  
+  """
+  if _re_shell_unquoted.match(value):
+    return value
+  else:
+    return "'%s'" % value.replace("'", "'\\''")
+
+
+def ShellQuoteArgs(args):
+  """Quotes all given shell arguments and concatenates using spaces.
+
+  """
+  return ' '.join([ShellQuote(i) for i in args])
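+
+# Examples (illustrative), assuming _re_shell_unquoted (defined earlier
+# in this module) matches only words made of "safe" unquoted characters:
+#   ShellQuote("abc")  -> abc         (safe, returned as-is)
+#   ShellQuote("a b")  -> 'a b'       (quoted)
+#   ShellQuote("a'b")  -> 'a'\''b'    (embedded quote escaped)
+#   ShellQuoteArgs(["echo", "a b"]) -> echo 'a b'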
diff --git a/man/Makefile.am b/man/Makefile.am
new file mode 100644
index 0000000000000000000000000000000000000000..0c6cdb7d654cc566c1cbaba260ee5e1819a5dbc0
--- /dev/null
+++ b/man/Makefile.am
@@ -0,0 +1,15 @@
+# Build man pages
+#
+
+man_MANS = ganeti.7 ganeti-os-interface.7 gnt-cluster.8 gnt-node.8 gnt-os.8 gnt-instance.8 ganeti-noded.8 ganeti-watcher.8
+EXTRA_DIST = ganeti-os-interface.sgml gnt-cluster.sgml gnt-node.sgml \
+             ganeti-watcher.sgml ganeti.sgml gnt-instance.sgml gnt-os.sgml ganeti-noded.sgml \
+	     footer.sgml $(man_MANS)
+
+%.8: %.sgml footer.sgml
+	docbook2man $<
+	rm -f manpage.links manpage.refs
+
+%.7: %.sgml footer.sgml
+	docbook2man $<
+	rm -f manpage.links manpage.refs
diff --git a/man/footer.sgml b/man/footer.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..40075e7dd553975e2a977a82b2410b6bdee34deb
--- /dev/null
+++ b/man/footer.sgml
@@ -0,0 +1,77 @@
+  <refsect1>
+    <title>REPORTING BUGS</title>
+    <para>
+      Report bugs to http://code.google.com/p/ganeti/ or contact the
+      developers using the ganeti mailing list
+      &lt;ganeti@googlegroups.com&gt;.
+    </para>
+  </refsect1>
+
+  <refsect1>
+    <title>SEE ALSO</title>
+
+    <para>
+      Ganeti overview and specifications:
+      <citerefentry>
+        <refentrytitle>ganeti</refentrytitle>
+        <manvolnum>7</manvolnum>
+      </citerefentry> (general overview),
+      <citerefentry>
+        <refentrytitle>ganeti-os-interface</refentrytitle>
+        <manvolnum>7</manvolnum>
+      </citerefentry> (guest OS definitions).
+
+    </para>
+    <para>Ganeti commands:
+      <citerefentry>
+        <refentrytitle>gnt-cluster</refentrytitle>
+        <manvolnum>8</manvolnum>
+      </citerefentry> (cluster-wide commands),
+      <citerefentry>
+        <refentrytitle>gnt-node</refentrytitle>
+        <manvolnum>8</manvolnum>
+      </citerefentry> (node-related commands),
+      <citerefentry>
+        <refentrytitle>gnt-instance</refentrytitle>
+        <manvolnum>8</manvolnum>
+      </citerefentry> (instance commands),
+      <citerefentry>
+        <refentrytitle>gnt-os</refentrytitle>
+        <manvolnum>8</manvolnum>
+      </citerefentry> (guest OS commands),
+      <citerefentry>
+        <refentrytitle>gnt-backup</refentrytitle>
+        <manvolnum>8</manvolnum>
+      </citerefentry> (instance import/export commands).
+    </para>
+
+    <para>Ganeti daemons:
+      <citerefentry>
+        <refentrytitle>ganeti-watcher</refentrytitle>
+        <manvolnum>8</manvolnum>
+      </citerefentry> (automatic instance restarter),
+      <citerefentry>
+        <refentrytitle>ganeti-noded</refentrytitle>
+        <manvolnum>8</manvolnum>
+      </citerefentry> (node daemon).
+    </para>
+
+  </refsect1>
+
+  <refsect1>
+    <title>COPYRIGHT</title>
+
+    <para>
+      Copyright (C) 2006, 2007 Google Inc. Permission is granted to
+      copy, distribute and/or modify under the terms of the &gnu;
+      General Public License as published by the Free Software
+      Foundation; either version 2 of the License, or (at your option)
+      any later version.
+    </para>
+
+    <para>
+      On Debian systems, the complete text of the GNU General Public
+      License can be found in /usr/share/common-licenses/GPL.
+    </para>
+
+  </refsect1>
diff --git a/man/ganeti-noded.sgml b/man/ganeti-noded.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..7f00e14cfa2481fe4d5c8fa3466fc5e326668409
--- /dev/null
+++ b/man/ganeti-noded.sgml
@@ -0,0 +1,100 @@
+<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
+
+  <!-- Please adjust the date whenever revising the manpage. -->
+  <!ENTITY dhdate      "<date>June 16, 2007</date>">
+  <!-- SECTION should be 1-8, maybe w/ subsection other parameters are
+       allowed: see man(7), man(1). -->
+  <!ENTITY dhsection   "<manvolnum>8</manvolnum>">
+  <!ENTITY dhucpackage "<refentrytitle>ganeti-noded</refentrytitle>">
+  <!ENTITY dhpackage   "ganeti-noded">
+
+  <!ENTITY debian      "<productname>Debian</productname>">
+  <!ENTITY gnu         "<acronym>GNU</acronym>">
+  <!ENTITY gpl         "&gnu; <acronym>GPL</acronym>">
+  <!ENTITY footer SYSTEM "footer.sgml">
+]>
+
+<refentry>
+  <refentryinfo>
+    <copyright>
+      <year>2006</year>
+      <year>2007</year>
+      <holder>Google Inc.</holder>
+    </copyright>
+    &dhdate;
+  </refentryinfo>
+  <refmeta>
+    &dhucpackage;
+
+    &dhsection;
+    <refmiscinfo>ganeti 1.2</refmiscinfo>
+  </refmeta>
+  <refnamediv>
+    <refname>&dhpackage;</refname>
+
+    <refpurpose>ganeti node daemon</refpurpose>
+  </refnamediv>
+  <refsynopsisdiv>
+    <cmdsynopsis>
+      <command>&dhpackage; </command>
+      <arg>-f</arg>
+
+    </cmdsynopsis>
+  </refsynopsisdiv>
+  <refsect1>
+    <title>DESCRIPTION</title>
+
+    <para>
+      The <command>&dhpackage;</command> is the daemon which is
+      responsible for the node functions in the ganeti system.
+    </para>
+
+    <para>
+      For testing purposes, you can give the <option>-f</option>
+      option and the program won't detach from the running terminal.
+    </para>
+    <refsect2>
+      <title>ROLE</title>
+      <para>
+        The role of the node daemon is to do almost all the actions
+        that change the state of the node. Things like creating disks
+        for instances, activating disks, starting/stopping instances
+        and so on are done via the node daemon.
+      </para>
+
+      <para>
+        If the node daemon is stopped, the instances are not affected,
+        but the master won't be able to talk to that node.
+      </para>
+    </refsect2>
+
+    <refsect2>
+      <title>COMMUNICATION PROTOCOL</title>
+      <para>
+        Currently the master-node communication is done using the
+        Twisted Perspective Broker library.
+      </para>
+    </refsect2>
+
+  </refsect1>
+
+  &footer;
+
+</refentry>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:t
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:2
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:nil
+sgml-exposed-tags:nil
+sgml-local-catalogs:nil
+sgml-local-ecat-files:nil
+End:
+-->
diff --git a/man/ganeti-os-interface.sgml b/man/ganeti-os-interface.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..f3f6717dd5a81a8488d05c5be992f4ee6824cff6
--- /dev/null
+++ b/man/ganeti-os-interface.sgml
@@ -0,0 +1,188 @@
+<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
+
+  <!-- Fill in your name for FIRSTNAME and SURNAME. -->
+  <!-- Please adjust the date whenever revising the manpage. -->
+  <!ENTITY dhdate      "<date>June 20, 2007</date>">
+  <!-- SECTION should be 1-8, maybe w/ subsection other parameters are
+       allowed: see man(7), man(1). -->
+  <!ENTITY dhsection   "<manvolnum>7</manvolnum>">
+  <!ENTITY dhucpackage "<refentrytitle>ganeti-os-interface</refentrytitle>">
+  <!ENTITY dhpackage   "ganeti">
+
+  <!ENTITY debian      "<productname>Debian</productname>">
+  <!ENTITY gnu         "<acronym>GNU</acronym>">
+  <!ENTITY gpl         "&gnu; <acronym>GPL</acronym>">
+  <!ENTITY footer SYSTEM "footer.sgml">
+]>
+
+<refentry>
+  <refentryinfo>
+    <copyright>
+      <year>2006</year>
+      <year>2007</year>
+      <holder>Google Inc.</holder>
+    </copyright>
+    &dhdate;
+  </refentryinfo>
+  <refmeta>
+    &dhucpackage;
+
+    &dhsection;
+    <refmiscinfo>ganeti 1.2</refmiscinfo>
+  </refmeta>
+  <refnamediv>
+    <refname>ganeti guest OS interface</refname>
+
+    <refpurpose>specifications for guest OS types
+    </refpurpose>
+
+  </refnamediv>
+
+  <refsect1>
+    <title>DESCRIPTION</title>
+
+    <para>
+      The method of supporting guest operating systems in Ganeti is to
+      have, for each guest OS type, a directory containing a number of
+      required files.
+    </para>
+
+
+  </refsect1>
+  <refsect1>
+    <title>REFERENCE</title>
+
+    <para>
+      There are four required files: <filename>create</filename>,
+      <filename>import</filename>, <filename>export</filename>
+      (executables) and <filename>ganeti_api_version</filename> (text
+      file).
+    </para>
+
+    <refsect2>
+      <title>create</title>
+      <cmdsynopsis>
+        <command>create</command>
+        <arg choice="req">-i <replaceable>instance_name</replaceable></arg>
+        <arg choice="req">-b <replaceable>blockdev_sda</replaceable></arg>
+        <arg choice="req">-s <replaceable>blockdev_sdb</replaceable></arg>
+      </cmdsynopsis>
+
+      <para>The <command>create</command> command is used for creating
+      a new instance from scratch.</para>
+
+      <para>The argument to the <option>-i</option> option is the FQDN
+      of the instance, which is guaranteed to resolve to an IP
+      address. The create script should configure the instance
+      according to this name. It can configure the IP statically or
+      not, depending on the deployment environment.</para>
+
+      <para>The <option>-b</option> and <option>-s</option> options
+      denote the block devices which will be visible in the instance
+      as <emphasis>sda</emphasis> and <emphasis>sdb</emphasis>. The
+      <emphasis>sda</emphasis> block device should be used for the
+      root disk (and will be passed as the root device for linux
+      kernels). The <emphasis>sdb</emphasis> device should be set up
+      for swap usage.</para>
+
+    </refsect2>
+
+    <refsect2>
+      <title>import</title>
+      <cmdsynopsis>
+        <command>import</command>
+        <arg choice="req">-i <replaceable>instance_name</replaceable></arg>
+        <arg choice="req">-b <replaceable>blockdev_sda</replaceable></arg>
+        <arg choice="req">-s <replaceable>blockdev_sdb</replaceable></arg>
+      </cmdsynopsis>
+
+      <para>
+        The <command>import</command> command is used for restoring an
+        instance from a backup as done by
+        <command>export</command>. The arguments are the same as for
+        <command>create</command> and the output of the
+        <command>export</command> will be provided on
+        <acronym>stdin</acronym>.
+      </para>
+
+    </refsect2>
+
+    <refsect2>
+      <title>export</title>
+      <cmdsynopsis>
+        <command>export</command>
+        <arg choice="req">-i <replaceable>instance_name</replaceable></arg>
+        <arg choice="req">-b <replaceable>blockdev_sda</replaceable></arg>
+      </cmdsynopsis>
+
+      <para>
+        This command is used in order to make a backup of the
+        instance. The command should write to stdout a dump of the
+        given block device. The output of this program will be passed
+        to the <command>import</command> command.
+      </para>
+
+      <para>
+        The options have the same meaning as for
+        <command>create</command> and <command>import</command>, with
+        the exception that the argument to <option>-i</option> denotes
+        an existing instance.
+      </para>
+
+    </refsect2>
+
+    <refsect2>
+      <title>ganeti_api_version</title>
+      <para>
+        The <filename>ganeti_api_version</filename> file is a plain
+        text file containing the version of the guest OS api that this
+        OS definition complies with. The version documented by this
+        man page is 4, so this file must contain 4 followed by a
+        newline.
+      </para>
+    </refsect2>
+
+  </refsect1>
+
+  <refsect1>
+    <title>NOTES</title>
+
+    <refsect2>
+      <title>Common behaviour</title>
+
+      <para>All the scripts should display a usage message when called with a wrong number of arguments or when the first argument is <option>-h</option> or <option>--help</option>.</para>
+
+    </refsect2>
+
+    <!--
+    <refsect2>
+
+      <title>Export/import format</title>
+
+      <para>It is up to the export and import scripts to define the format they use. It is only required for these two to work together. It is not recommended that </para>
+
+    </refsect2>
+    -->
+
+  </refsect1>
+
+  &footer;
+
+</refentry>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:t
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:2
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:nil
+sgml-exposed-tags:nil
+sgml-local-catalogs:nil
+sgml-local-ecat-files:nil
+End:
+-->
diff --git a/man/ganeti-watcher.sgml b/man/ganeti-watcher.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..e5973e7e7afab6f04f76c8da0d03f629aaa50ddb
--- /dev/null
+++ b/man/ganeti-watcher.sgml
@@ -0,0 +1,108 @@
+<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
+
+  <!-- Fill in your name for FIRSTNAME and SURNAME. -->
+  <!-- Please adjust the date whenever revising the manpage. -->
+  <!ENTITY dhdate      "<date>June 20, 2007</date>">
+  <!-- SECTION should be 1-8, maybe w/ subsection other parameters are
+       allowed: see man(7), man(1). -->
+  <!ENTITY dhsection   "<manvolnum>8</manvolnum>">
+  <!ENTITY dhucpackage "<refentrytitle>ganeti-watcher</refentrytitle>">
+  <!ENTITY dhpackage   "ganeti-watcher">
+
+  <!ENTITY debian      "<productname>Debian</productname>">
+  <!ENTITY gnu         "<acronym>GNU</acronym>">
+  <!ENTITY gpl         "&gnu; <acronym>GPL</acronym>">
+  <!ENTITY footer SYSTEM "footer.sgml">
+]>
+
+<refentry>
+  <refentryinfo>
+    <copyright>
+      <year>2007</year>
+      <holder>Google Inc.</holder>
+    </copyright>
+    &dhdate;
+  </refentryinfo>
+  <refmeta>
+    &dhucpackage;
+
+    &dhsection;
+    <refmiscinfo>ganeti 1.2</refmiscinfo>
+  </refmeta>
+  <refnamediv>
+    <refname>&dhpackage;</refname>
+
+    <refpurpose>ganeti cluster watcher</refpurpose>
+  </refnamediv>
+  <refsynopsisdiv>
+    <cmdsynopsis>
+      <command>&dhpackage; </command>
+
+    </cmdsynopsis>
+  </refsynopsisdiv>
+  <refsect1>
+    <title>DESCRIPTION</title>
+
+    <para>
+      The <command>&dhpackage;</command> is a periodically run script
+      which is responsible for keeping the instances in the correct
+      status.
+    </para>
+
+    <para>
+      Its function is to keep all instances which are marked as
+      <emphasis>up</emphasis> in the configuration file running, by
+      trying to start them a limited number of times.
+    </para>
+
+    <para>In order to prevent commands from piling up, all the
+    <emphasis>gnt-*</emphasis> commands executed by ganeti-watcher are
+    run with a timeout of 15 seconds.
+    </para>
+
+    <para>
+      The command has a state file located at
+      <filename>/var/lib/ganeti/restart_state</filename> and a log
+      file at
+      <filename>/var/log/ganeti/watcher.log</filename>. Removal of
+      either file will not affect correct operation; the removal of
+      the state file will just cause the restart counters for the
+      instances to reset to zero.
+    </para>
+
+  </refsect1>
+
+  <refsect1>
+    <title>KNOWN BUGS</title>
+
+    <para>
+      Due to the way we initialize DRBD peers, restarting a secondary
+      node for an instance will cause the DRBD endpoints on that node
+      to disappear, thus all instances which have that node as a
+      secondary will lose redundancy. The watcher does not detect this
+      situation. The workaround is to manually run
+      <command>gnt-instance activate-disks</command> for all the
+      affected instances.
+    </para>
+  </refsect1>
+
+  &footer;
+
+</refentry>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:t
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:2
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:nil
+sgml-exposed-tags:nil
+sgml-local-catalogs:nil
+sgml-local-ecat-files:nil
+End:
+-->
diff --git a/man/ganeti.sgml b/man/ganeti.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..9bbd07d2fd7abec725ef7340a929ea4edf99a32f
--- /dev/null
+++ b/man/ganeti.sgml
@@ -0,0 +1,93 @@
+<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
+
+  <!-- Fill in your name for FIRSTNAME and SURNAME. -->
+  <!-- Please adjust the date whenever revising the manpage. -->
+  <!ENTITY dhdate      "<date>June 16, 2007</date>">
+  <!-- SECTION should be 1-8, maybe w/ subsection other parameters are
+       allowed: see man(7), man(1). -->
+  <!ENTITY dhsection   "<manvolnum>7</manvolnum>">
+  <!ENTITY dhucpackage "<refentrytitle>ganeti</refentrytitle>">
+  <!ENTITY dhpackage   "ganeti">
+
+  <!ENTITY debian      "<productname>Debian</productname>">
+  <!ENTITY gnu         "<acronym>GNU</acronym>">
+  <!ENTITY gpl         "&gnu; <acronym>GPL</acronym>">
+  <!ENTITY footer SYSTEM "footer.sgml">
+]>
+
+<refentry>
+  <refentryinfo>
+    <copyright>
+      <year>2006</year>
+      <year>2007</year>
+      <holder>Google Inc.</holder>
+    </copyright>
+    &dhdate;
+  </refentryinfo>
+  <refmeta>
+    &dhucpackage;
+
+    &dhsection;
+    <refmiscinfo>ganeti 1.2</refmiscinfo>
+  </refmeta>
+  <refnamediv>
+    <refname>&dhpackage;</refname>
+
+    <refpurpose>cluster-based virtualization management</refpurpose>
+
+  </refnamediv>
+  <refsynopsisdiv>
+    <screen>
+# gnt-cluster init cluster1.example.com
+# gnt-node add node2.example.com
+# gnt-os add -o debian-etch -p /srv/ganeti/os/debian-etch
+# gnt-instance add -n node2.example.com -o debian-etch -s 128 -m 8 \
+&gt; -t plain instance1.example.com
+    </screen>
+  </refsynopsisdiv>
+  <refsect1>
+    <title>DESCRIPTION</title>
+
+    <para>
+      The ganeti software manages physical nodes and virtual instances
+      of a cluster, built on top of virtualization software. The
+      current version (1.2) supports Xen 3.0.
+    </para>
+
+  </refsect1>
+  <refsect1>
+    <title>Quick start</title>
+
+    <para>
+      First you must install the software on all the cluster nodes,
+      either from sources or (if available) from a package. The next
+      step is to create the initial cluster configuration, using
+      <computeroutput>gnt-cluster init</computeroutput>.
+    </para>
+
+    <para>
+      Then you can add other nodes, or start creating instances.
+    </para>
+
+  </refsect1>
+
+  &footer;
+
+</refentry>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:t
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:2
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:nil
+sgml-exposed-tags:nil
+sgml-local-catalogs:nil
+sgml-local-ecat-files:nil
+End:
+-->
diff --git a/man/gnt-cluster.sgml b/man/gnt-cluster.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..b24d86a57934ff230f816c014ea999f3c82dccba
--- /dev/null
+++ b/man/gnt-cluster.sgml
@@ -0,0 +1,237 @@
+<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
+
+  <!-- Fill in your name for FIRSTNAME and SURNAME. -->
+  <!-- Please adjust the date whenever revising the manpage. -->
+  <!ENTITY dhdate      "<date>June 20, 2007</date>">
+  <!-- SECTION should be 1-8, maybe w/ subsection other parameters are
+       allowed: see man(7), man(1). -->
+  <!ENTITY dhsection   "<manvolnum>8</manvolnum>">
+  <!ENTITY dhucpackage "<refentrytitle>gnt-cluster</refentrytitle>">
+  <!ENTITY dhpackage   "gnt-cluster">
+
+  <!ENTITY debian      "<productname>Debian</productname>">
+  <!ENTITY gnu         "<acronym>GNU</acronym>">
+  <!ENTITY gpl         "&gnu; <acronym>GPL</acronym>">
+  <!ENTITY footer SYSTEM "footer.sgml">
+]>
+
+<refentry>
+  <refentryinfo>
+    <copyright>
+      <year>2006</year>
+      <year>2007</year>
+      <holder>Google Inc.</holder>
+    </copyright>
+    &dhdate;
+  </refentryinfo>
+  <refmeta>
+    &dhucpackage;
+
+    &dhsection;
+    <refmiscinfo>ganeti 1.2</refmiscinfo>
+  </refmeta>
+  <refnamediv>
+    <refname>&dhpackage;</refname>
+
+    <refpurpose>ganeti administration, cluster-wide</refpurpose>
+  </refnamediv>
+  <refsynopsisdiv>
+    <cmdsynopsis>
+      <command>&dhpackage; </command>
+
+      <arg choice="req">command</arg>
+      <arg>arguments...</arg>
+    </cmdsynopsis>
+  </refsynopsisdiv>
+  <refsect1>
+    <title>DESCRIPTION</title>
+
+    <para>
+      The <command>&dhpackage;</command> is used for cluster-wide
+      administration in the ganeti system.
+    </para>
+
+  </refsect1>
+  <refsect1>
+    <title>COMMANDS</title>
+
+    <cmdsynopsis>
+      <command>command</command>
+      <arg>-n <replaceable>node</replaceable></arg>
+      <arg choice="req"><replaceable>command</replaceable></arg>
+    </cmdsynopsis>
+
+    <para>
+      Executes a command on one or more nodes. If the option
+      <option>-n</option> is not given, the command will be executed
+      on all nodes; otherwise it will be executed only on the node(s)
+      specified. Use the option multiple times for running it on
+      multiple nodes, like:
+
+      <screen>
+        # gnt-cluster command -n node1.example.com -n node2.example.com date
+      </screen>
+
+    </para>
+
+    <para>The command is constructed by concatenating all other
+    command line arguments. For example, to list the contents of the
+    <filename class="directory">/etc</filename> directory on all
+    nodes, run:
+
+      <screen>
+        # gnt-cluster command ls -l /etc
+      </screen>
+
+      and the command that is actually executed is
+      <computeroutput>ls -l /etc</computeroutput>.
+    </para>
+
+
+    <cmdsynopsis>
+      <command>copyfile</command>
+      <arg>-n <replaceable>node</replaceable></arg>
+      <arg choice="req"><replaceable>file</replaceable></arg>
+    </cmdsynopsis>
+
+    <para>
+      Copies a file to all or to some nodes. The argument specifies
+      the source file (on the current system); the <option>-n</option>
+      argument specifies the target node, or nodes if the option is
+      given multiple times. If <option>-n</option> is not given at
+      all, the file will be copied to all nodes.
+
+      Example:
+      <screen>
+        # gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test
+      </screen>
+
+      This will copy the file <filename>/tmp/test</filename> from the
+      current node to the two named nodes.
+    </para>
+
+    <cmdsynopsis>
+      <command>getmaster</command>
+    </cmdsynopsis>
+
+    <para>
+      Displays the current master node.
+    </para>
+
+    <cmdsynopsis>
+      <command>info</command>
+    </cmdsynopsis>
+
+    <para>
+      Shows runtime cluster information: cluster name, architecture
+      (32 or 64 bit), master node, node list and instance list.
+    </para>
+
+    <cmdsynopsis>
+      <command>init</command>
+      <arg>-s <replaceable>secondary_ip</replaceable></arg>
+      <arg choice="req"><replaceable>clustername</replaceable></arg>
+    </cmdsynopsis>
+    <para>
+      This command is run only once initially, on the first node of
+      the cluster. It will initialize the cluster configuration, set
+      up the ssh keys, and more.
+    </para>
+
+    <para>
+      Note that the <replaceable>clustername</replaceable> cannot be
+      an arbitrary name: it has to be resolvable to an IP address via
+      DNS, and it is best if you give the fully-qualified domain name.
+    </para>
+
+    <para>
+      The cluster can run in two modes: single-homed or
+      dual-homed. In the first case, all traffic (public traffic,
+      inter-node traffic and data replication traffic) goes over the
+      same interface. In the dual-homed case, the data replication
+      traffic goes over the second network. The <option>-s</option> option
+      here marks the cluster as dual-homed and its parameter
+      represents this node's address on the second network. If you
+      initialise the cluster with <option>-s</option>, all nodes added
+      must have a secondary IP as well.
+    </para>
+
+    <para>
+      Note that for Ganeti it doesn't matter whether the secondary
+      network is actually a separate physical network, or is done
+      using tunneling, etc. For performance reasons, it's of course
+      recommended to use a separate network.
+    </para>
+
+    <cmdsynopsis>
+      <command>masterfailover</command>
+    </cmdsynopsis>
+
+    <para>
+      Fails the master role over to the current node.
+    </para>
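+
+    <para>
+      Example (run this on the node that should become the new
+      master):
+      <screen>
+        # gnt-cluster masterfailover
+      </screen>
+    </para>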
+
+    <cmdsynopsis>
+      <command>destroy</command>
+    </cmdsynopsis>
+
+    <para>
+      Remove all configuration files related to the cluster, so that a
+      <command>gnt-cluster init</command> can be done again afterwards.
+    </para>
+
+    <cmdsynopsis>
+      <command>verify</command>
+    </cmdsynopsis>
+
+    <para>
+      Verify correctness of cluster configuration. This is safe with
+      respect to running instances, and incurs no downtime of the
+      instances.
+    </para>
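+
+    <para>
+      Example (this command takes no arguments):
+      <screen>
+        # gnt-cluster verify
+      </screen>
+    </para>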
+
+    <cmdsynopsis>
+      <command>version</command>
+    </cmdsynopsis>
+
+    <para>
+      Show the cluster version.
+    </para>
+
+  </refsect1>
+
+  &footer;
+
+</refentry>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:t
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:2
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:nil
+sgml-exposed-tags:nil
+sgml-local-catalogs:nil
+sgml-local-ecat-files:nil
+End:
+-->
diff --git a/man/gnt-instance.sgml b/man/gnt-instance.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..9e547f15969cc8ad6b014b99be3ef3a217d5cbdf
--- /dev/null
+++ b/man/gnt-instance.sgml
@@ -0,0 +1,680 @@
+<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
+
+  <!-- Fill in your name for FIRSTNAME and SURNAME. -->
+  <!-- Please adjust the date whenever revising the manpage. -->
+  <!ENTITY dhdate      "<date>May 16, 2007</date>">
+  <!-- SECTION should be 1-8, maybe w/ subsection other parameters are
+       allowed: see man(7), man(1). -->
+  <!ENTITY dhsection   "<manvolnum>8</manvolnum>">
+  <!ENTITY dhucpackage "<refentrytitle>gnt-instance</refentrytitle>">
+  <!ENTITY dhpackage   "gnt-instance">
+
+  <!ENTITY debian      "<productname>Debian</productname>">
+  <!ENTITY gnu         "<acronym>GNU</acronym>">
+  <!ENTITY gpl         "&gnu; <acronym>GPL</acronym>">
+  <!ENTITY footer SYSTEM "footer.sgml">
+]>
+
+<refentry>
+  <refentryinfo>
+    <copyright>
+      <year>2006</year>
+      <year>2007</year>
+      <holder>Google Inc.</holder>
+    </copyright>
+    &dhdate;
+  </refentryinfo>
+  <refmeta>
+    &dhucpackage;
+
+    &dhsection;
+    <refmiscinfo>ganeti 1.2</refmiscinfo>
+  </refmeta>
+  <refnamediv>
+    <refname>&dhpackage;</refname>
+
+    <refpurpose>ganeti instance administration</refpurpose>
+  </refnamediv>
+  <refsynopsisdiv>
+    <cmdsynopsis>
+      <command>&dhpackage; </command>
+
+      <arg choice="req">command</arg>
+      <arg>arguments...</arg>
+    </cmdsynopsis>
+  </refsynopsisdiv>
+  <refsect1>
+    <title>DESCRIPTION</title>
+
+    <para>
+      The <command>&dhpackage;</command> is used for instance
+      administration in the ganeti system.
+    </para>
+
+  </refsect1>
+  <refsect1>
+    <title>COMMANDS</title>
+
+    <refsect2>
+      <title>Creation/removal/querying</title>
+
+      <refsect3>
+        <title>ADD</title>
+        <cmdsynopsis>
+          <command>add</command>
+          <arg choice="req">-n <replaceable>node</replaceable></arg>
+          <arg>-s <replaceable>disksize</replaceable></arg>
+          <arg>-o <replaceable>os-type</replaceable></arg>
+          <arg>-m <replaceable>memsize</replaceable></arg>
+          <arg>-b <replaceable>bridge</replaceable></arg>
+          <sbr>
+          <arg choice="req">-t<group>
+              <arg>diskless</arg>
+              <arg>plain</arg>
+              <arg>local_raid1</arg>
+              <arg>remote_raid1</arg>
+            </group>
+          </arg>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+        <para>
+          Creates a new instance on the specified
+          host. <replaceable>instance</replaceable> must be in DNS and
+          resolve to an IP in the same network as the nodes in the
+          cluster.
+        </para>
+
+        <para>
+          The <option>-s</option> option specifies the disk size for
+          the instance, in gibibytes (defaults to 20 GiB).
+        </para>
+
+        <para>
+          The <option>-o</option> option specifies the operating
+          system to be installed. The available operating systems can
+          be listed with <command>gnt-os list</command>.
+        </para>
+
+        <para>
+          The <option>-m</option> option specifies the memory size for
+          the instance, in mebibytes (defaults to 128 MiB).
+        </para>
+
+        <para>
+          The <option>-b</option> option specifies the bridge to which the
+          instance will be connected (defaults to the cluster-wide default
+          bridge specified at cluster initialization time).
+        </para>
+
+        <para>
+          The <option>-t</option> option specifies the disk layout type for
+          the instance. The available choices are:
+          <variablelist>
+            <varlistentry>
+              <term>diskless</term>
+              <listitem>
+                <para>
+                  This creates an instance with no disks. It is useful for
+                  testing only (or other special cases).
+                </para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>plain</term>
+              <listitem>
+                <para>Disk devices will be logical volumes.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>local_raid1</term>
+              <listitem>
+                <para>
+                  Disk devices will be md raid1 arrays over two local
+                  logical volumes.
+                </para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>remote_raid1</term>
+              <listitem>
+                <para>
+                  Disk devices will be md raid1 arrays with one
+                  component (so it's not actually raid1): a drbd device
+                  between the instance's primary node and the node given
+                  by the option <option>--secondary-node</option>.
+                </para>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </para>
+
+        <para>
+          The <option>--secondary-node</option> option is used with
+          the remote raid disk template type and specifies the remote
+          node.
+        </para>
+
+        <para>
+          If you do not want gnt-instance to wait for the disk mirror
+          to be synced, use the <option>--no-wait-for-sync</option>
+          option.
+        </para>
+
+
+        <para>
+          Example:
+          <screen>
+# gnt-instance add -t plain -s 30 -m 512 -n node1.example.com \
+> instance1.example.com
+# gnt-instance add -t remote_raid1 --secondary-node node3.example.com \
+> -s 30 -m 512 -n node1.example.com instance2.example.com
+          </screen>
+        </para>
+
+      </refsect3>
+
+      <refsect3>
+        <title>REMOVE</title>
+
+        <cmdsynopsis>
+          <command>remove</command>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          Remove an instance. This will remove all data from the
+          instance and there is <emphasis>no way back</emphasis>. If
+          you are not sure whether you will use the instance again,
+          shut it down first with <command>shutdown</command> and
+          leave it in the shutdown state for a while.
+        </para>
+
+        <para>
+          Example:
+          <screen>
+# gnt-instance remove instance1.example.com
+          </screen>
+        </para>
+      </refsect3>
+
+      <refsect3>
+        <title>LIST</title>
+
+        <cmdsynopsis>
+          <command>list</command>
+          <arg>--no-headers</arg>
+          <arg>--separator=<replaceable>SEPARATOR</replaceable></arg>
+          <arg>-o <replaceable>FIELD,...</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          Shows the currently configured instances with memory usage,
+          disk usage, the node they are running on, and the CPU time,
+          counted in seconds, used by each instance since its latest
+          restart.
+        </para>
+
+        <para>
+          The <option>--no-headers</option> option will skip the
+          initial header line. The <option>--separator</option> option
+          takes an argument which denotes what will be used between
+          the output fields. Both these options are to help scripting.
+        </para>
+
+        <para>
+          The <option>-o</option> option takes a comma-separated list
+          of output fields. The available fields and their meaning
+          are:
+          <variablelist>
+            <varlistentry>
+              <term>name</term>
+              <listitem>
+                <simpara>the instance name</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>os</term>
+              <listitem>
+                <simpara>the OS of the instance</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>pnode</term>
+              <listitem>
+                <simpara>the primary node of the instance</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>snodes</term>
+              <listitem>
+                <simpara>comma-separated list of secondary-nodes for the
+                  instance; usually this will be just one node</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>admin_state</term>
+              <listitem>
+                <simpara>the desired state of the instance (either "yes"
+                  or "no", denoting whether the instance should run or
+                  not)</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>admin_ram</term>
+              <listitem>
+                <simpara>the desired memory for the instance</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>disk_template</term>
+              <listitem>
+                <simpara>the disk template of the instance</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>oper_state</term>
+              <listitem>
+                <simpara>the actual state of the instance; can take one
+                  of the values "running", "stopped", "(node down)"</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>oper_ram</term>
+              <listitem>
+                <simpara>the actual memory usage of the instance as seen
+                  by the hypervisor</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>ip</term>
+              <listitem>
+                <simpara>the ip address ganeti recognizes as associated with
+                the instance interface</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>mac</term>
+              <listitem>
+                <simpara>the instance interface MAC address</simpara>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>bridge</term>
+              <listitem>
+                <simpara>bridge the instance is connected to
+                </simpara>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </para>
+
+        <para>
+          There is a subtle grouping of the available output
+          fields: all fields except for <option>oper_state</option>
+          and <option>oper_ram</option> are configuration values,
+          not run-time values. So if you don't select any of the
+          <option>oper_*</option> fields, the query will be satisfied
+          instantly from the cluster configuration, without having to
+          ask the remote nodes for the data. This can be helpful for
+          big clusters when you only want some data and it makes sense
+          to specify a reduced set of output fields.
+        </para>
+
+        <para>The default output field list is:
+          <simplelist type="inline">
+            <member>name</member>
+            <member>os</member>
+            <member>pnode</member>
+            <member>admin_state</member>
+            <member>oper_state</member>
+            <member>oper_ram</member>
+          </simplelist>.
+        </para>
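+
+        <para>
+          Example (selecting only configuration fields, so the query
+          is answered from the configuration alone):
+          <screen>
+# gnt-instance list -o name,os,pnode --no-headers
+          </screen>
+        </para>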
+      </refsect3>
+
+      <refsect3>
+        <title>INFO</title>
+
+        <cmdsynopsis>
+          <command>info</command>
+          <arg rep="repeat"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          Show detailed information about the (given) instances. This
+          is different from <command>list</command> as it shows
+          detailed data about the instance's disks (especially useful
+          for remote raid templates).
+        </para>
+      </refsect3>
+
+      <refsect3>
+        <title>MODIFY</title>
+
+        <cmdsynopsis>
+          <command>modify</command>
+          <arg choice="opt">-m <replaceable>memsize</replaceable></arg>
+          <arg choice="opt">-p <replaceable>vcpus</replaceable></arg>
+          <arg choice="opt">-i <replaceable>ip</replaceable></arg>
+          <arg choice="opt">-b <replaceable>bridge</replaceable></arg>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          Modify the memory size, number of vcpus, ip address and/or bridge
+          for an instance.
+        </para>
+
+        <para>
+          The memory size is given in MiB. Note that you need to give
+          at least one of the arguments, otherwise the command
+          complains.
+        </para>
+
+        <para>
+          All the changes take effect at the next restart. If the
+          instance is running, there is no immediate effect on it.
+        </para>
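+
+        <para>
+          Example:
+          <screen>
+# gnt-instance modify -m 512 instance1.example.com
+          </screen>
+        </para>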
+      </refsect3>
+
+    </refsect2>
+
+    <refsect2>
+      <title>Starting/stopping/connecting to console</title>
+
+      <refsect3>
+        <title>STARTUP</title>
+
+        <cmdsynopsis>
+          <command>startup</command>
+          <arg>--extra=<replaceable>PARAMS</replaceable></arg>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          Starts an instance. The node on which to start the instance is
+          taken from the configuration.
+        </para>
+
+        <para>
+          The <option>--extra</option> option is used to pass
+          additional arguments to the instance's kernel for this start
+          only. Currently there is no way to specify a persistent set
+          of arguments (besides the hardcoded one). Note that this may
+          not apply to all virtualization types.
+        </para>
+
+
+        <para>
+          Example:
+          <screen>
+# gnt-instance startup instance1.example.com
+# gnt-instance startup --extra single test1.example.com
+          </screen>
+        </para>
+      </refsect3>
+
+      <refsect3>
+        <title>SHUTDOWN</title>
+
+        <cmdsynopsis>
+          <command>shutdown</command>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          Stops the instance. If the instance cannot be cleanly
+          stopped within a hardcoded interval (currently 2 minutes),
+          the command will forcibly stop it (equivalent to switching
+          off the power on a physical machine).
+        </para>
+
+        <para>
+          Example:
+          <screen>
+# gnt-instance shutdown instance1.example.com
+          </screen>
+        </para>
+      </refsect3>
+
+      <refsect3>
+        <title>CONSOLE</title>
+        <cmdsynopsis>
+          <command>console</command>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          Connects to the console of the given instance. If the instance
+          is not up, an error is returned.
+        </para>
+
+        <para>
+          Example:
+          <screen>
+# gnt-instance console instance1.example.com
+          </screen>
+        </para>
+      </refsect3>
+
+    </refsect2>
+
+    <refsect2>
+      <title>Disk management</title>
+
+      <refsect3>
+        <title>REPLACE-DISKS</title>
+
+        <cmdsynopsis>
+          <command>replace-disks</command>
+          <arg choice="req">--new-secondary <replaceable>NODE</replaceable></arg>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          This command does a full add and replace for both disks of
+          an instance. It basically does an
+          <command>add-mirror</command> and
+          <command>remove-mirror</command> for both disks of the
+          instance.
+        </para>
+
+        <para>
+          If you also want to replace the secondary node during this
+          process (for example to fix a broken secondary node), you
+          can do so using the <option>--new-secondary</option> option.
+        </para>
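+
+        <para>
+          Example:
+          <screen>
+# gnt-instance replace-disks --new-secondary node4.example.com \
+> instance2.example.com
+          </screen>
+        </para>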
+      </refsect3>
+
+      <refsect3>
+        <title>ADD-MIRROR</title>
+        <cmdsynopsis>
+          <command>add-mirror</command>
+          <arg choice="req">-b <replaceable>sdX</replaceable></arg>
+          <arg choice="req">-n <replaceable>node</replaceable></arg>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+        <para>
+          Adds a new mirror to the disk layout of the instance, if the
+          instance has a remote raid disk layout.
+
+          The new mirror member will be between the instance's primary
+          node and the node given with the <option>-n</option> option.
+        </para>
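+
+        <para>
+          Example:
+          <screen>
+# gnt-instance add-mirror -b sda -n node3.example.com \
+> instance1.example.com
+          </screen>
+        </para>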
+      </refsect3>
+
+      <refsect3>
+        <title>REMOVE-MIRROR</title>
+
+        <cmdsynopsis>
+          <command>remove-mirror</command>
+          <arg choice="req">-b <replaceable>sdX</replaceable></arg>
+          <arg choice="req">-p <replaceable>id</replaceable></arg>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+        <para>
+          Removes a mirror component from the disk layout of the
+          instance, if the instance has a remote raid disk layout.
+        </para>
+
+        <para>
+          You need to specify which disk to act on using the
+          <option>-b</option> option (either <filename>sda</filename>
+          or <filename>sdb</filename>) and the mirror component, which
+          is identified by the <option>-p</option> option. You can
+          find the list of valid identifiers with the
+          <command>info</command> command.
+        </para>
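+
+        <para>
+          Example (the component id <emphasis>2</emphasis> is
+          illustrative; use an identifier reported by the
+          <command>info</command> command):
+          <screen>
+# gnt-instance remove-mirror -b sda -p 2 instance1.example.com
+          </screen>
+        </para>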
+
+      </refsect3>
+
+      <refsect3>
+        <title>ACTIVATE-DISKS</title>
+
+        <cmdsynopsis>
+          <command>activate-disks</command>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+        <para>
+          Activates the block devices of the given instance. If
+          successful, the command will show the location and name of
+          the block devices:
+          <screen>
+node1.example.com:sda:/dev/md0
+node1.example.com:sdb:/dev/md1
+          </screen>
+
+          In this example, <emphasis>node1.example.com</emphasis> is
+          the name of the node on which the devices have been
+          activated. The <emphasis>sda</emphasis> and
+          <emphasis>sdb</emphasis> are the names of the block devices
+          inside the instance. <emphasis>/dev/md0</emphasis> and
+          <emphasis>/dev/md1</emphasis> are the names of the block
+          devices as visible on the node.
+        </para>
+
+        <para>
+          Note that it is safe to run this command while the instance
+          is already running.
+        </para>
+      </refsect3>
+
+      <refsect3>
+        <title>DEACTIVATE-DISKS</title>
+
+        <cmdsynopsis>
+          <command>deactivate-disks</command>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+        <para>
+          De-activates the block devices of the given instance. Note
+          that if you run this command for a remote raid instance
+          type while it is running, it will not be able to shut down
+          the block devices on the primary node, but it will shut
+          down the block devices on the secondary nodes, thus
+          breaking the replication.
+        </para>
+
+      </refsect3>
+
+    </refsect2>
+
+    <refsect2>
+      <title>Recovery</title>
+
+      <refsect3>
+        <title>FAILOVER</title>
+
+        <cmdsynopsis>
+          <command>failover</command>
+          <arg>-f</arg>
+          <arg>--ignore-consistency</arg>
+          <arg choice="req"><replaceable>instance</replaceable></arg>
+        </cmdsynopsis>
+
+        <para>
+          Failover will fail the instance over to its secondary
+          node. This works only for instances having a remote raid
+          disk layout.
+        </para>
+
+        <para>
+          Normally the failover will check the consistency of the
+          disks before failing over the instance. If you are trying to
+          migrate instances off a dead node, this will fail. Use the
+          <option>--ignore-consistency</option> option for this
+          purpose.
+        </para>
+
+        <para>
+          Example:
+          <screen>
+# gnt-instance failover instance1.example.com
+          </screen>
+        </para>
+      </refsect3>
+
+    </refsect2>
+
+  </refsect1>
+
+  &footer;
+
+</refentry>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:t
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:2
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:nil
+sgml-exposed-tags:nil
+sgml-local-catalogs:nil
+sgml-local-ecat-files:nil
+End:
+-->
diff --git a/man/gnt-node.sgml b/man/gnt-node.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..f0d01196f19a35b8b04c8b7e58f85ac4d1e45215
--- /dev/null
+++ b/man/gnt-node.sgml
@@ -0,0 +1,300 @@
+<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
+
+  <!-- Fill in your name for FIRSTNAME and SURNAME. -->
+  <!-- Please adjust the date whenever revising the manpage. -->
+  <!ENTITY dhdate      "<date>June 20, 2007</date>">
+  <!-- SECTION should be 1-8, maybe w/ subsection other parameters are
+       allowed: see man(7), man(1). -->
+  <!ENTITY dhsection   "<manvolnum>8</manvolnum>">
+  <!ENTITY dhucpackage "<refentrytitle>gnt-node</refentrytitle>">
+  <!ENTITY dhpackage   "gnt-node">
+
+  <!ENTITY debian      "<productname>Debian</productname>">
+  <!ENTITY gnu         "<acronym>GNU</acronym>">
+  <!ENTITY gpl         "&gnu; <acronym>GPL</acronym>">
+  <!ENTITY footer SYSTEM "footer.sgml">
+]>
+
+<refentry>
+  <refentryinfo>
+    <copyright>
+      <year>2006</year>
+      <year>2007</year>
+      <holder>Google Inc.</holder>
+    </copyright>
+    &dhdate;
+  </refentryinfo>
+  <refmeta>
+    &dhucpackage;
+
+    &dhsection;
+    <refmiscinfo>ganeti 1.2</refmiscinfo>
+  </refmeta>
+  <refnamediv>
+    <refname>&dhpackage;</refname>
+
+    <refpurpose>node administration</refpurpose>
+  </refnamediv>
+  <refsynopsisdiv>
+    <cmdsynopsis>
+      <command>&dhpackage; </command>
+
+      <arg choice="req">command</arg>
+      <arg>arguments...</arg>
+    </cmdsynopsis>
+  </refsynopsisdiv>
+  <refsect1>
+    <title>DESCRIPTION</title>
+
+    <para>
+      The <command>&dhpackage;</command> is used for managing the
+      (physical) nodes in the ganeti system.
+    </para>
+
+  </refsect1>
+  <refsect1>
+    <title>COMMANDS</title>
+
+    <refsect2>
+      <title>ADD</title>
+
+      <cmdsynopsis>
+        <command>add</command>
+        <arg>-s <replaceable>secondary_ip</replaceable></arg>
+        <arg choice="req"><replaceable>nodename</replaceable></arg>
+      </cmdsynopsis>
+
+      <para>
+        Adds the given node to the cluster.
+      </para>
+
+      <para>
+        This command is used to join a new node to the cluster. You
+        will have to provide the root password of the node to be
+        able to add it to the cluster. The command needs to be
+        run on the ganeti master.
+      </para>
+
+      <para>
+        Note that the command is potentially destructive, as it will
+        forcibly join the specified host to the cluster, not paying
+        attention to its current status (it could already be in a
+        cluster, etc.).
+      </para>
+
+      <para>
+        The <option>-s</option> option is used in dual-homed clusters
+        and specifies the new node's IP in the secondary network. See
+        the discussion in <citerefentry>
+        <refentrytitle>gnt-cluster</refentrytitle>
+        <manvolnum>8</manvolnum> </citerefentry> for more
+        information.
+      </para>
+
+      <para>
+        Example:
+        <screen>
+# gnt-node add node5.example.com
+# gnt-node add -s 192.168.44.5 node5.example.com
+        </screen>
+      </para>
+    </refsect2>
+
+    <refsect2>
+      <title>INFO</title>
+
+      <cmdsynopsis>
+        <command>info</command>
+        <arg rep="repeat"><replaceable>node</replaceable></arg>
+      </cmdsynopsis>
+
+      <para>
+        Show detailed information about the nodes in the cluster. If you
+        don't give any arguments, all nodes will be shown; otherwise the
+        output will be restricted to the given names.
+      </para>
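+
+      <para>
+        Example:
+        <screen>
+# gnt-node info node1.example.com node2.example.com
+        </screen>
+      </para>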
+    </refsect2>
+
+    <refsect2>
+      <title>LIST</title>
+
+      <cmdsynopsis>
+        <command>list</command>
+        <arg>--no-headers</arg>
+        <arg>--separator=<replaceable>SEPARATOR</replaceable></arg>
+        <arg>-o <replaceable>FIELD,...</replaceable></arg>
+      </cmdsynopsis>
+
+      <para>
+        Lists the nodes in the cluster. If you give the
+        <option>--ip-info</option> option, the output contains just
+        the node name, primary ip and secondary ip. In case the
+        secondary ip is the same as the primary one, it will be listed
+        as <emphasis>"-"</emphasis>.
+      </para>
+
+      <para>
+        The <option>--no-headers</option> option will skip the initial
+        header line. The <option>--separator</option> option takes an
+        argument which denotes what will be used between the output
+        fields. Both these options are to help scripting.
+      </para>
+
+      <para>
+        The <option>-o</option> option takes a comma-separated list of
+        output fields. The available fields and their meaning are:
+        <variablelist>
+          <varlistentry>
+            <term>name</term>
+            <listitem>
+              <simpara>the node name</simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>pinst</term>
+            <listitem>
+              <simpara>the number of instances having this node as
+              primary</simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>sinst</term>
+            <listitem>
+              <simpara>the number of instances having this node as a
+              secondary node</simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>pip</term>
+            <listitem>
+              <simpara>the primary ip of this node (used for cluster
+              communication)</simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>sip</term>
+            <listitem>
+              <simpara>
+                the secondary ip of this node (used for data
+                replication in dual-ip clusters, see <citerefentry>
+                <refentrytitle>gnt-cluster</refentrytitle>
+                <manvolnum>8</manvolnum>
+                </citerefentry>
+              </simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>dtotal</term>
+            <listitem>
+              <simpara>total disk space in the volume group used for
+              instance disk allocations</simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>dfree</term>
+            <listitem>
+              <simpara>available disk space in the volume group</simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>mtotal</term>
+            <listitem>
+              <simpara>total memory on the physical node</simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>mnode</term>
+            <listitem>
+              <simpara>the memory used by the node itself</simpara>
+            </listitem>
+          </varlistentry>
+          <varlistentry>
+            <term>mfree</term>
+            <listitem>
+              <simpara>memory available for instance
+              allocations</simpara>
+            </listitem>
+          </varlistentry>
+        </variablelist>
+      </para>
+
+      <para>
+        Note that some of these fields are known from the
+        configuration of the cluster (<simplelist type="inline">
+        <member>name</member> <member>pinst</member>
+        <member>sinst</member> <member>pip</member>
+        <member>sip</member></simplelist>) and thus the master does
+        not need to contact the node for this data (making the listing
+        fast if only fields from this set are selected), whereas the
+        other fields are "live" fields and we need to make a query to
+        the cluster nodes.
+      </para>
+
+      <para>
+        Depending on the virtualization type and implementation
+        details, the mtotal, mnode and mfree may have slightly varying
+        meanings. For example, some solutions share the node memory
+        with the pool of memory used for instances
+        (<acronym>UML</acronym>), whereas others have separate memory
+        for the node and for the instances (Xen).
+      </para>
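+
+      <para>
+        Example (configuration-only fields, answered without
+        contacting the nodes):
+        <screen>
+# gnt-node list -o name,pinst,sinst,pip,sip
+        </screen>
+      </para>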
+    </refsect2>
+
+    <refsect2>
+      <title>REMOVE</title>
+
+      <cmdsynopsis>
+        <command>remove</command>
+        <arg choice="req"><replaceable>nodename</replaceable></arg>
+      </cmdsynopsis>
+
+      <para>
+        Removes a node from the cluster. Instances must be removed or
+        migrated to another node beforehand.
+      </para>
+
+      <para>
+        Example:
+        <screen>
+# gnt-node remove node5.example.com
+        </screen>
+      </para>
+    </refsect2>
+
+  </refsect1>
+
+  &footer;
+
+</refentry>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:t
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:2
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:nil
+sgml-exposed-tags:nil
+sgml-local-catalogs:nil
+sgml-local-ecat-files:nil
+End:
+-->
diff --git a/man/gnt-os.sgml b/man/gnt-os.sgml
new file mode 100644
index 0000000000000000000000000000000000000000..0f41d64c6e669787d230bb5d62432884a9ab0899
--- /dev/null
+++ b/man/gnt-os.sgml
@@ -0,0 +1,103 @@
+<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
+
+  <!-- Fill in your name for FIRSTNAME and SURNAME. -->
+  <!-- Please adjust the date whenever revising the manpage. -->
+  <!ENTITY dhdate      "<date>August 10, 2006</date>">
+  <!-- SECTION should be 1-8, maybe w/ subsection other parameters are
+       allowed: see man(7), man(1). -->
+  <!ENTITY dhsection   "<manvolnum>8</manvolnum>">
+  <!ENTITY dhucpackage "<refentrytitle>gnt-os</refentrytitle>">
+  <!ENTITY dhpackage   "gnt-os">
+
+  <!ENTITY debian      "<productname>Debian</productname>">
+  <!ENTITY gnu         "<acronym>GNU</acronym>">
+  <!ENTITY gpl         "&gnu; <acronym>GPL</acronym>">
+  <!ENTITY footer SYSTEM "footer.sgml">
+]>
+
+<refentry>
+  <refentryinfo>
+    <copyright>
+      <year>2006</year>
+      <year>2007</year>
+      <holder>Google Inc.</holder>
+    </copyright>
+    &dhdate;
+  </refentryinfo>
+  <refmeta>
+    &dhucpackage;
+
+    &dhsection;
+    <refmiscinfo>ganeti 1.2</refmiscinfo>
+  </refmeta>
+  <refnamediv>
+    <refname>&dhpackage;</refname>
+
+    <refpurpose>instance operating system administration</refpurpose>
+  </refnamediv>
+  <refsynopsisdiv>
+    <cmdsynopsis>
+      <command>&dhpackage; </command>
+
+      <arg choice="req">command</arg>
+      <arg>arguments...</arg>
+    </cmdsynopsis>
+  </refsynopsisdiv>
+  <refsect1>
+    <title>DESCRIPTION</title>
+
+    <para>
+      The <command>&dhpackage;</command> command is used for managing
+      the list of available operating system flavours for the
+      instances in the
+      ganeti cluster.
+    </para>
+
+  </refsect1>
+  <refsect1>
+    <title>COMMANDS</title>
+
+    <cmdsynopsis>
+      <command>list</command>
+    </cmdsynopsis>
+
+    <para>
+      Gives the list of available/supported OSes to use in the
+      instances. When creating an instance you can pass one of these
+      OS names as an option.
+    </para>
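+
+    <para>
+      Example (the OS name shown is only an illustration):
+      <screen>
+# gnt-os list
+Name
+debian-etch
+      </screen>
+    </para>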
+
+    <cmdsynopsis>
+      <command>diagnose</command>
+    </cmdsynopsis>
+
+    <para>
+      This command will help you see why an installed OS is not
+      available in the cluster. The <command>list</command> command
+      shows only the OSes that the cluster sees as available on all
+      nodes. An OS may be missing from a node, or only partially
+      installed, and this command will show the details of all the
+      OSes and the reasons why they are or are not valid.
+    </para>
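+
+    <para>
+      Example (the OS and node names are only an illustration):
+      <screen>
+# gnt-os diagnose
+Name         Status/Node        Details
+debian-etch  partially valid
+             node1.example.com  valid
+             node2.example.com  os dir not found
+      </screen>
+    </para>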
+
+  </refsect1>
+
+  &footer;
+
+</refentry>
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:t
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:2
+sgml-indent-data:t
+sgml-parent-document:nil
+sgml-default-dtd-file:nil
+sgml-exposed-tags:nil
+sgml-local-catalogs:nil
+sgml-local-ecat-files:nil
+End:
+-->
diff --git a/scripts/Makefile.am b/scripts/Makefile.am
new file mode 100644
index 0000000000000000000000000000000000000000..bbc53e40a20c84575a13a634a358152de7734c24
--- /dev/null
+++ b/scripts/Makefile.am
@@ -0,0 +1 @@
+dist_sbin_SCRIPTS = gnt-instance gnt-cluster gnt-node gnt-os
diff --git a/scripts/gnt-cluster b/scripts/gnt-cluster
new file mode 100755
index 0000000000000000000000000000000000000000..412ceebfd4794f4844ecb696a2489e227a95b11d
--- /dev/null
+++ b/scripts/gnt-cluster
@@ -0,0 +1,243 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+import sys
+from optparse import make_option
+import pprint
+
+from ganeti.cli import *
+from ganeti import opcodes
+
+
+def InitCluster(opts, args):
+  """Initialize the cluster.
+
+  Args:
+    opts - class with options as members
+    args - list of arguments, expected to be [clustername]
+
+  """
+  op = opcodes.OpInitCluster(cluster_name=args[0],
+                             secondary_ip=opts.secondary_ip,
+                             hypervisor_type=opts.hypervisor_type,
+                             vg_name=opts.vg_name,
+                             mac_prefix=opts.mac_prefix,
+                             def_bridge=opts.def_bridge)
+  SubmitOpCode(op)
+  return 0
+
+
+def DestroyCluster(opts, args):
+  """Destroy the cluster.
+
+  Args:
+    opts - class with options as members
+  """
+  if not opts.yes_do_it:
+    print ("Destroying a cluster is irreversibly. If you really want destroy"
+           "this cluster, supply the --yes-do-it option.")
+    return 1
+
+  op = opcodes.OpDestroyCluster()
+  SubmitOpCode(op)
+  return 0
+
+
+def ShowClusterVersion(opts, args):
+  """Write version of ganeti software to the standard output.
+
+  Args:
+    opts - class with options as members
+
+  """
+  op = opcodes.OpQueryClusterInfo()
+  result = SubmitOpCode(op)
+  print ("Software version: %s" % result["software_version"])
+  print ("Internode protocol: %s" % result["protocol_version"])
+  print ("Configuration format: %s" % result["config_version"])
+  print ("OS api version: %s" % result["os_api_version"])
+  print ("Export interface: %s" % result["export_version"])
+  return 0
+
+
+def ShowClusterMaster(opts, args):
+  """Write name of master node to the standard output.
+
+  Args:
+    opts - class with options as members
+
+  """
+  op = opcodes.OpQueryClusterInfo()
+  result = SubmitOpCode(op)
+  print (result["master"])
+  return 0
+
+
+def ShowClusterConfig(opts, args):
+  """Shows cluster information.
+
+  """
+  op = opcodes.OpQueryClusterInfo()
+  result = SubmitOpCode(op)
+
+  print ("Cluster name: %s" % result["name"])
+
+  print ("Architecture: %s (%s)" %
+            (result["architecture"][0], result["architecture"][1]))
+
+  print ("Master node: %s" % result["master"])
+
+  print ("Instances:")
+  for name, node in result["instances"]:
+    print ("  - %s (on %s)" % (name, node))
+  print ("Nodes:")
+  for name in result["nodes"]:
+    print ("  - %s" % name)
+
+  return 0
+
+
+def ClusterCopyFile(opts, args):
+  """Copy a file from master to some nodes.
+
+  Args:
+    opts - class with options as members
+    args - list containing a single element, the file name
+  Opts used:
+    nodes - list containing the name of target nodes; if empty, all nodes
+
+  """
+  op = opcodes.OpClusterCopyFile(filename=args[0], nodes=opts.nodes)
+  SubmitOpCode(op)
+  return 0
+
+
+def RunClusterCommand(opts, args):
+  """Run a command on some nodes.
+
+  Args:
+    opts - class with options as members
+    args - the command list as a list
+  Opts used:
+    nodes: list containing the name of target nodes; if empty, all nodes
+
+  """
+  command = " ".join(args)
+  nodes = opts.nodes
+  op = opcodes.OpRunClusterCommand(command=command, nodes=nodes)
+  result = SubmitOpCode(op)
+  for node, sshcommand, output, exit_code in result:
+    print ("------------------------------------------------")
+    print ("node: %s" % node)
+    print ("command: %s" % sshcommand)
+    print ("%s" % output)
+    print ("return code = %s" % exit_code)
+
+
+def VerifyCluster(opts, args):
+  """Verify integrity of cluster, performing various test on nodes.
+
+  Args:
+    opts - class with options as members
+
+  """
+  op = opcodes.OpVerifyCluster()
+  result = SubmitOpCode(op)
+  return result
+
+
+def MasterFailover(opts, args):
+  """Failover the master node.
+
+  This command, when run on a non-master node, will cause the current
+  master to cease being master, and the non-master to become the new
+  master.
+
+  """
+  op = opcodes.OpMasterFailover()
+  SubmitOpCode(op)
+
+
+# this is an option common to more than one command, so we declare
+# it here and reuse it
+node_option = make_option("-n", "--node", action="append", dest="nodes",
+                          help="Node to copy to (if not given, all nodes)"
+                          ", can be given multiple times", metavar="<node>",
+                          default=[])
+
+commands = {
+  'init': (InitCluster, ARGS_ONE,
+           [DEBUG_OPT,
+            make_option("-s", "--secondary-ip", dest="secondary_ip",
+                        help="Specify the secondary ip for this node;"
+                        " if given, the entire cluster must have secondary"
+                        " addresses",
+                        metavar="ADDRESS", default=None),
+            make_option("-t", "--hypervisor-type", dest="hypervisor_type",
+                        help="Specify the hypervisor type (xen-3.0, fake)",
+                        metavar="TYPE", choices=["xen-3.0", "fake"],
+                        default="xen-3.0",),
+            make_option("-m", "--mac-prefix", dest="mac_prefix",
+                        help="Specify the mac prefix for the instance IP"
+                        " addresses, in the format XX:XX:XX",
+                        metavar="PREFIX",
+                        default="aa:00:00",),
+            make_option("-g", "--vg-name", dest="vg_name",
+                        help="Specify the volume group name "
+                        " (cluster-wide) for disk allocation [xenvg]",
+                        metavar="VG",
+                        default="xenvg",),
+            make_option("-b", "--bridge", dest="def_bridge",
+                        help="Specify the default bridge name (cluster-wide)"
+                        " to connect the instances to [xen-br0]",
+                        metavar="BRIDGE",
+                        default="xen-br0",),
+            ],
+           "[opts...] <cluster_name>",
+           "Initialises a new cluster configuration"),
+  'destroy': (DestroyCluster, ARGS_NONE,
+              [DEBUG_OPT,
+               make_option("--yes-do-it", dest="yes_do_it",
+                           help="Destroy cluster",
+                           action="store_true"),
+              ],
+              "", "Destroy cluster"),
+  'verify': (VerifyCluster, ARGS_NONE, [DEBUG_OPT],
+             "", "Does a check on the cluster configuration"),
+  'masterfailover': (MasterFailover, ARGS_NONE, [DEBUG_OPT],
+                     "", "Makes the current node the master"),
+  'version': (ShowClusterVersion, ARGS_NONE, [DEBUG_OPT],
+              "", "Shows the cluster version"),
+  'getmaster': (ShowClusterMaster, ARGS_NONE, [DEBUG_OPT],
+                "", "Shows the cluster master"),
+  'copyfile': (ClusterCopyFile, ARGS_ONE, [DEBUG_OPT, node_option],
+               "[-n node...] <filename>",
+               "Copies a file to all (or only some) nodes"),
+  'command': (RunClusterCommand, ARGS_ATLEAST(1), [DEBUG_OPT, node_option],
+              "[-n node...] <command>",
+              "Runs a command on all (or only some) nodes"),
+  'info': (ShowClusterConfig, ARGS_NONE, [DEBUG_OPT],
+                 "", "Show cluster configuration"),
+  }
+
+if __name__ == '__main__':
+  retcode = GenericMain(commands)
+  sys.exit(retcode)
diff --git a/scripts/gnt-instance b/scripts/gnt-instance
new file mode 100755
index 0000000000000000000000000000000000000000..a9fc2f81428242dbd310f0c1724594a1ec7898ca
--- /dev/null
+++ b/scripts/gnt-instance
@@ -0,0 +1,556 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+import sys
+import os
+from optparse import make_option
+import textwrap
+from cStringIO import StringIO
+
+from ganeti.cli import *
+from ganeti import opcodes
+from ganeti import logger
+from ganeti import constants
+from ganeti import utils
+
+
+def ListInstances(opts, args):
+  """List nodes and their properties.
+
+  """
+  if opts.output is None:
+    selected_fields = ["name", "os", "pnode", "admin_state",
+                       "oper_state", "oper_ram"]
+  else:
+    selected_fields = opts.output.split(",")
+
+  op = opcodes.OpQueryInstances(output_fields=selected_fields)
+  output = SubmitOpCode(op)
+
+  mlens = [0 for name in selected_fields]
+
+  format_fields = []
+  unitformat_fields = ("admin_ram", "oper_ram")
+  for field in selected_fields:
+    if field in ("admin_ram", "oper_ram"):
+      format_fields.append("%*s")
+    else:
+      format_fields.append("%-*s")
+  separator = opts.separator
+  if "%" in separator:
+    separator = separator.replace("%", "%%")
+  format = separator.join(format_fields)
+
+  for row in output:
+    for idx, val in enumerate(row):
+      if opts.human_readable and selected_fields[idx] in unitformat_fields:
+        try:
+          val = int(val)
+        except ValueError:
+          pass
+        else:
+          val = row[idx] = utils.FormatUnit(val)
+      mlens[idx] = max(mlens[idx], len(val))
+
+  if not opts.no_headers:
+    header_list = {"name": "Instance", "os": "OS", "pnode": "Primary_node",
+                   "snodes": "Secondary_Nodes", "admin_state": "Autostart",
+                   "oper_state": "Status", "admin_ram": "Configured_memory",
+                   "oper_ram": "Memory", "disk_template": "Disk_template",
+                   "ip": "IP Address", "mac": "MAC Address",
+                   "bridge": "Bridge"}
+    args = []
+    for idx, name in enumerate(selected_fields):
+      hdr = header_list[name]
+      mlens[idx] = max(mlens[idx], len(hdr))
+      args.append(mlens[idx])
+      args.append(hdr)
+    logger.ToStdout(format % tuple(args))
+
+  for line in output:
+    args = []
+    for idx in range(len(selected_fields)):
+      args.append(mlens[idx])
+      args.append(line[idx])
+    logger.ToStdout(format % tuple(args))
+
+  return 0
+
+
+def AddInstance(opts, args):
+  """Add an instance to the cluster.
+
+  Args:
+    opts - class with options as members
+    args - list with a single element, the instance name
+  Opts used:
+    mem - amount of memory to allocate to instance (MiB)
+    size - amount of disk space to allocate to instance (MiB)
+    os - which OS to run on instance
+    node - node to run new instance on
+
+  """
+
+  instance = args[0]
+
+  op = opcodes.OpCreateInstance(instance_name=instance, mem_size=opts.mem,
+                                disk_size=opts.size, swap_size=opts.swap,
+                                disk_template=opts.disk_template,
+                                mode=constants.INSTANCE_CREATE,
+                                os_type=opts.os, pnode=opts.node,
+                                snode=opts.snode, vcpus=opts.vcpus,
+                                ip=opts.ip, bridge=opts.bridge, start=True,
+                                wait_for_sync=opts.wait_for_sync)
+  SubmitOpCode(op)
+  return 0
+
+
+def RemoveInstance(opts, args):
+  """Remove an instance.
+
+  Args:
+    opts - class with options as members
+    args - list containing a single element, the instance name
+
+  """
+  instance_name = args[0]
+  force = opts.force
+
+  if not force:
+    usertext = ("This will remove the volumes of the instance %s"
+                " (including mirrors), thus removing all the data"
+                " of the instance. Continue?") % instance_name
+    if not opts._ask_user(usertext):
+      return 1
+
+  op = opcodes.OpRemoveInstance(instance_name=instance_name)
+  SubmitOpCode(op)
+  return 0
+
+
+def ActivateDisks(opts, args):
+  """Activate an instance's disks.
+
+  This serves two purposes:
+    - it allows one (as long as the instance is not running) to mount
+      the disks and modify them from the node
+    - it repairs inactive secondary drbds
+
+  """
+  instance_name = args[0]
+  op = opcodes.OpActivateInstanceDisks(instance_name=instance_name)
+  disks_info = SubmitOpCode(op)
+  for host, iname, nname in disks_info:
+    print "%s:%s:%s" % (host, iname, nname)
+  return 0
+
+
+def DeactivateDisks(opts, args):
+  """Command-line interface for _ShutdownInstanceBlockDevices.
+
+  This function takes the instance name, looks for its primary node
+  and then tries to shut down its block devices on that node.
+
+  """
+  instance_name = args[0]
+  op = opcodes.OpDeactivateInstanceDisks(instance_name=instance_name)
+  SubmitOpCode(op)
+  return 0
+
+
+def StartupInstance(opts, args):
+  """Shutdown an instance.
+
+  Args:
+    opts - class with options as members
+    args - list containing a single element, the instance name
+
+  """
+  instance_name = args[0]
+  op = opcodes.OpStartupInstance(instance_name=instance_name, force=opts.force,
+                                 extra_args=opts.extra_args)
+  SubmitOpCode(op)
+  return 0
+
+
+def ShutdownInstance(opts, args):
+  """Shutdown an instance.
+
+  Args:
+    opts - class with options as members
+    args - list containing a single element, the instance name
+
+  """
+  instance_name = args[0]
+  op = opcodes.OpShutdownInstance(instance_name=instance_name)
+  SubmitOpCode(op)
+  return 0
+
+
+def AddMDDRBDComponent(opts, args):
+  """Add a new component to a remote_raid1 disk.
+
+  Args:
+    opts - class with options as members
+    args - list with a single element, the instance name
+
+  """
+  op = opcodes.OpAddMDDRBDComponent(instance_name=args[0],
+                                    disk_name=opts.disk,
+                                    remote_node=opts.node)
+  SubmitOpCode(op)
+  return 0
+
+
+def RemoveMDDRBDComponent(opts, args):
+  """Connect to the console of an instance
+
+  Args:
+    opts - class with options as members
+    args - list with a single element, the instance name
+
+  """
+  op = opcodes.OpRemoveMDDRBDComponent(instance_name=args[0],
+                                       disk_name=opts.disk,
+                                       disk_id=opts.port)
+  SubmitOpCode(op)
+  return 0
+
+
+def ReplaceDisks(opts, args):
+  """Replace the disks of an instance
+
+  Args:
+    opts - class with options as members
+    args - list with a single element, the instance name
+
+  """
+  instance_name = args[0]
+  new_secondary = opts.new_secondary
+  op = opcodes.OpReplaceDisks(instance_name=instance_name,
+                              remote_node=new_secondary)
+  SubmitOpCode(op)
+  return 0
+
+
+def FailoverInstance(opts, args):
+  """Failover an instance.
+
+  The failover is done by shutting it down on its present node and
+  starting it on the secondary.
+
+  Args:
+    opts - class with options as members
+    args - list with a single element, the instance name
+  Opts used:
+    force - whether to failover without asking questions.
+
+  """
+  instance_name = args[0]
+  force = opts.force
+
+  if not force:
+    usertext = ("Failover will happen to image %s."
+                " This requires a shutdown of the instance. Continue?" %
+                (instance_name,))
+    usertext = textwrap.fill(usertext)
+    if not opts._ask_user(usertext):
+      return 1
+
+  op = opcodes.OpFailoverInstance(instance_name=instance_name,
+                                  ignore_consistency=opts.ignore_consistency)
+  SubmitOpCode(op)
+  return 0
+
+
+def ConnectToInstanceConsole(opts, args):
+  """Connect to the console of an instance.
+
+  Args:
+    opts - class with options as members
+    args - list with a single element, the instance name
+
+  """
+  instance_name = args[0]
+
+  op = opcodes.OpConnectConsole(instance_name=instance_name)
+  node, console_cmd = SubmitOpCode(op)
+  # drop lock and exec so other commands can run while we have console
+  utils.Unlock("cmd")
+  try:
+    os.execv("/usr/bin/ssh", ["ssh", "-qt", node, console_cmd])
+  finally:
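+    # os.execv only returns on failure (on success it replaces the
+    # process image), so reaching this point means the console command
+    # could not be run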
+    sys.stderr.write("Can't run console command %s on node %s" %
+                     (console_cmd, node))
+    os._exit(1)
+
+
+def _FormatBlockDevInfo(buf, dev, indent_level):
+  """Show block device information.
+
+  This is only used by ShowInstanceConfig(), but it's too big to be
+  left for an inline definition.
+
+  """
+  def helper(buf, dtype, status):
+    """Format one line for phsyical device status."""
+    if not status:
+      buf.write("not active\n")
+    else:
+      (path, major, minor, syncp, estt, degr) = status
+      buf.write("%s (%d:%d)" % (path, major, minor))
+      if dtype in ("md_raid1", "drbd"):
+        if syncp is not None:
+          sync_text = "*RECOVERING* %5.2f%%," % syncp
+          if estt:
+            sync_text += " ETA %ds" % estt
+          else:
+            sync_text += " ETA unknown"
+        else:
+          sync_text = "in sync"
+        if degr:
+          degr_text = "*DEGRADED*"
+        else:
+          degr_text = "ok"
+        buf.write(" %s, status %s" % (sync_text, degr_text))
+      buf.write("\n")
+
+  if dev["iv_name"] is not None:
+    data = "  - %s, " % dev["iv_name"]
+  else:
+    data = "  - "
+  data += "type: %s" % dev["dev_type"]
+  if dev["logical_id"] is not None:
+    data += ", logical_id: %s" % (dev["logical_id"],)
+  elif dev["physical_id"] is not None:
+    data += ", physical_id: %s" % (dev["physical_id"],)
+  buf.write("%*s%s\n" % (2*indent_level, "", data))
+  buf.write("%*s    primary:   " % (2*indent_level, ""))
+  helper(buf, dev["dev_type"], dev["pstatus"])
+
+  if dev["sstatus"]:
+    buf.write("%*s    secondary: " % (2*indent_level, ""))
+    helper(buf, dev["dev_type"], dev["sstatus"])
+
+  if dev["children"]:
+    for child in dev["children"]:
+      _FormatBlockDevInfo(buf, child, indent_level+1)
+
+
+def ShowInstanceConfig(opts, args):
+  """Compute instance run-time status.
+
+  """
+
+  retcode = 0
+  op = opcodes.OpQueryInstanceData(instances=args)
+  result = SubmitOpCode(op)
+
+  if not result:
+    logger.ToStdout("No instances.")
+    return 1
+
+  buf = StringIO()
+  for instance_name in result:
+    instance = result[instance_name]
+    buf.write("Instance name: %s\n" % instance["name"])
+    buf.write("State: configured to be %s, actual state is %s\n" %
+              (instance["config_state"], instance["run_state"]))
+    buf.write("  Nodes:\n")
+    buf.write("    - primary: %s\n" % instance["pnode"])
+    buf.write("    - secondaries: %s\n" % ", ".join(instance["snodes"]))
+    buf.write("  Operating system: %s\n" % instance["os"])
+    buf.write("  Hardware:\n")
+    buf.write("    - memory: %dMiB\n" % instance["memory"])
+    buf.write("    - NICs: %s\n" %
+        ", ".join(["{MAC: %s, IP: %s, bridge: %s}" %
+                   (mac, ip, bridge)
+                     for mac, ip, bridge in instance["nics"]]))
+    buf.write("  Block devices:\n")
+
+    for device in instance["disks"]:
+      _FormatBlockDevInfo(buf, device, 1)
+
+  logger.ToStdout(buf.getvalue().rstrip('\n'))
+  return retcode
+
+
+def SetInstanceParms(opts, args):
+  """Modifies an instance.
+
+  All parameters take effect only at the next restart of the instance.
+
+  Args:
+    opts - class with options as members
+    args - list with a single element, the instance name
+  Opts used:
+    memory - the new memory size
+    vcpus - the new number of cpus
+
+  """
+  if not opts.mem and not opts.vcpus and not opts.ip and not opts.bridge:
+    logger.ToStdout("Please give at least one of the parameters.")
+    return 1
+
+  op = opcodes.OpSetInstanceParms(instance_name=args[0], mem=opts.mem,
+                                  vcpus=opts.vcpus, ip=opts.ip,
+                                  bridge=opts.bridge)
+  result = SubmitOpCode(op)
+
+  if result:
+    logger.ToStdout("Modified instance %s" % args[0])
+    for param, data in result:
+      logger.ToStdout(" - %-5s -> %s" % (param, data))
+    logger.ToStdout("Please don't forget that these parameters take effect"
+                    " only at the next start of the instance.")
+  return 0
+
+
+# options used in more than one cmd
+node_opt = make_option("-n", "--node", dest="node", help="Target node",
+                       metavar="<node>")
+force_opt = make_option("-f", "--force", dest="force", action="store_true",
+                        default=False, help="Force the operation")
+
+# this is defined separately due to readability only
+add_opts = [
+  DEBUG_OPT,
+  node_opt,
+  cli_option("-s", "--os-size", dest="size", help="Disk size",
+             default=20 * 1024, type="unit", metavar="<size>"),
+  cli_option("--swap-size", dest="swap", help="Swap size",
+             default=4 * 1024, type="unit", metavar="<size>"),
+  cli_option("-o", "--os-type", dest="os", help="What OS to run",
+             metavar="<os>"),
+  cli_option("-m", "--memory", dest="mem", help="Memory size",
+              default=128, type="unit", metavar="<mem>"),
+  make_option("-p", "--cpu", dest="vcpus", help="Number of virtual CPUs",
+              default=1, type="int", metavar="<PROC>"),
+  make_option("-t", "--disk-template", dest="disk_template",
+              help="Custom disk setup (diskless, plain, local_raid1 or"
+              " remote_raid1)", default=None, metavar="TEMPL"),
+  make_option("-i", "--ip", dest="ip",
+              help="IP address ('none' [default], 'auto', or specify address)",
+              default='none', type="string", metavar="<ADDRESS>"),
+  make_option("--no-wait-for-sync", dest="wait_for_sync", default=True,
+              action="store_false", help="Don't wait for sync (DANGEROUS!)"),
+  make_option("--secondary-node", dest="snode",
+              help="Secondary node for remote_raid1 disk layout",
+              metavar="<node>"),
+  make_option("-b", "--bridge", dest="bridge",
+              help="Bridge to connect this instance to",
+              default=None, metavar="<bridge>")
+  ]
+
+
+commands = {
+  'add': (AddInstance, ARGS_ONE, add_opts,
+          "[opts...] <name>",
+          "Creates and adds a new instance to the cluster"),
+  'add-mirror': (AddMDDRBDComponent, ARGS_ONE,
+                [DEBUG_OPT, node_opt,
+                 make_option("-b", "--disk", dest="disk", metavar="sdX",
+                             help=("The name of the instance disk for which to"
+                                   " add the mirror"))],
+                "-n node -b disk <instance>",
+                "Creates a new mirror for the instance"),
+  'console': (ConnectToInstanceConsole, ARGS_ONE, [DEBUG_OPT],
+              "<instance>",
+              "Opens a console on the specified instance"),
+  'failover': (FailoverInstance, ARGS_ONE,
+               [DEBUG_OPT, force_opt,
+                make_option("--ignore-consistency", dest="ignore_consistency",
+                            action="store_true", default=False,
+                            help="Ignore the consistency of the disks on"
+                            " the secondary"),
+                ],
+               "[-f] <instance>",
+               "Stops the instance and starts it on the backup node, using"
+               " the remote mirror (only for instances of type remote_raid1)"),
+  'info': (ShowInstanceConfig, ARGS_ANY, [DEBUG_OPT], "[<instance>...]",
+           "Show information on the specified instance"),
+  'list': (ListInstances, ARGS_NONE,
+           [DEBUG_OPT, NOHDR_OPT, SEP_OPT, USEUNITS_OPT,
+            make_option("-o", "--output", dest="output", action="store",
+                        type="string", help="Select output fields",
+                        metavar="FIELDS")
+            ],
+           "", "Lists the instances and their status"),
+  'remove': (RemoveInstance, ARGS_ONE, [DEBUG_OPT, force_opt],
+             "[-f] <instance>", "Shuts down the instance and removes it"),
+  'remove-mirror': (RemoveMDDRBDComponent, ARGS_ONE,
+                   [DEBUG_OPT, node_opt,
+                    make_option("-b", "--disk", dest="disk", metavar="sdX",
+                                help=("The name of the instance disk"
+                                      " for which to add the mirror")),
+                    make_option("-p", "--port", dest="port", metavar="PORT",
+                                help=("The port of the drbd device"
+                                      " which to remove from the mirror"),
+                                type="int"),
+                    ],
+                   "-b disk -p port <instance>",
+                   "Removes a mirror from the instance"),
+  'replace-disks': (ReplaceDisks, ARGS_ONE,
+                    [DEBUG_OPT,
+                     make_option("-n", "--new-secondary", dest="new_secondary",
+                                 metavar="NODE",
+                                 help=("New secondary node (if you want to"
+                                       " change the secondary)"))],
+                    "[-n NODE] <instance>",
+                    "Replaces all disks for the instance"),
+
+  'modify': (SetInstanceParms, ARGS_ONE,
+             [DEBUG_OPT, force_opt,
+              cli_option("-m", "--memory", dest="mem",
+                         help="Memory size",
+                         default=None, type="unit", metavar="<mem>"),
+              make_option("-p", "--cpu", dest="vcpus",
+                          help="Number of virtual CPUs",
+                          default=None, type="int", metavar="<PROC>"),
+              make_option("-i", "--ip", dest="ip",
+                          help="IP address ('none' or numeric IP)",
+                          default=None, type="string", metavar="<ADDRESS>"),
+              make_option("-b", "--bridge", dest="bridge",
+                          help="Bridge to connect this instance to",
+                          default=None, type="string", metavar="<bridge>")
+              ],
+             "<instance>", "Alters the parameters of an instance"),
+  'shutdown': (ShutdownInstance, ARGS_ONE, [DEBUG_OPT],
+               "<instance>", "Stops an instance"),
+  'startup': (StartupInstance, ARGS_ONE,
+              [DEBUG_OPT, force_opt,
+               make_option("-e", "--extra", dest="extra_args",
+                           help="Extra arguments for the instance's kernel",
+                           default=None, type="string", metavar="<PARAMS>"),
+               ],
+            "<instance>", "Starts an instance"),
+  'activate-disks': (ActivateDisks, ARGS_ONE, [DEBUG_OPT],
+                     "<instance>",
+                     "Activate an instance's disks"),
+  'deactivate-disks': (DeactivateDisks, ARGS_ONE, [DEBUG_OPT],
+                       "<instance>",
+                       "Deactivate an instance's disks"),
+  }
+
+if __name__ == '__main__':
+  retcode = GenericMain(commands)
+  sys.exit(retcode)
diff --git a/scripts/gnt-node b/scripts/gnt-node
new file mode 100755
index 0000000000000000000000000000000000000000..88e8c2915b8a68603a8702f5fd9d867ce4257651
--- /dev/null
+++ b/scripts/gnt-node
@@ -0,0 +1,156 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+import sys
+from optparse import make_option
+
+from ganeti.cli import *
+from ganeti import opcodes
+from ganeti import logger
+from ganeti import utils
+
+
+def AddNode(opts, args):
+  """Add node cli-to-processor bridge."""
+  op = opcodes.OpAddNode(node_name=args[0], secondary_ip=opts.secondary_ip)
+  SubmitOpCode(op)
+
+
+def ListNodes(opts, args):
+  """List nodes and their properties.
+
+  """
+  if opts.output is None:
+    selected_fields = ["name", "dtotal", "dfree",
+                       "mtotal", "mnode", "mfree",
+                       "pinst", "sinst"]
+  else:
+    selected_fields = opts.output.split(",")
+
+  op = opcodes.OpQueryNodes(output_fields=selected_fields)
+  output = SubmitOpCode(op)
+
+  mlens = [0 for name in selected_fields]
+  format_fields = []
+  unitformat_fields = ("dtotal", "dfree", "mtotal", "mnode", "mfree")
+  for field in selected_fields:
+    if field in ("dtotal", "dfree", "mtotal", "mnode",
+                 "mfree", "pinst", "sinst"):
+      format_fields.append("%*s")
+    else:
+      format_fields.append("%-*s")
+
+  separator = opts.separator
+  if "%" in separator:
+    separator = separator.replace("%", "%%")
+  format = separator.join(format_fields)
+
+  for row in output:
+    for idx, val in enumerate(row):
+      if opts.human_readable and selected_fields[idx] in unitformat_fields:
+        try:
+          val = int(val)
+        except ValueError:
+          pass
+        else:
+          val = row[idx] = utils.FormatUnit(val)
+      mlens[idx] = max(mlens[idx], len(val))
+
+  if not opts.no_headers:
+    header_list = {"name": "Node", "pinst": "Pinst", "sinst": "Sinst",
+                   "pip": "PrimaryIP", "sip": "SecondaryIP",
+                   "dtotal": "DTotal", "dfree": "DFree",
+                   "mtotal": "MTotal", "mnode": "MNode", "mfree": "MFree"}
+    args = []
+    for idx, name in enumerate(selected_fields):
+      hdr = header_list[name]
+      mlens[idx] = max(mlens[idx], len(hdr))
+      args.append(mlens[idx])
+      args.append(hdr)
+    logger.ToStdout(format % tuple(args))
+
+  for row in output:
+    args = []
+    for idx, val in enumerate(row):
+      args.append(mlens[idx])
+      args.append(val)
+    logger.ToStdout(format % tuple(args))
+
+  return 0
+
+
+def ShowNodeConfig(opts, args):
+  """Show node information.
+
+  """
+  op = opcodes.OpQueryNodeData(nodes=args)
+  result = SubmitOpCode(op)
+
+  for name, primary_ip, secondary_ip, pinst, sinst in result:
+    logger.ToStdout("Node name: %s" % name)
+    logger.ToStdout("  primary ip: %s" % primary_ip)
+    logger.ToStdout("  secondary ip: %s" % secondary_ip)
+    if pinst:
+      logger.ToStdout("  primary for instances:")
+      for iname in pinst:
+        logger.ToStdout("    - %s" % iname)
+    else:
+      logger.ToStdout("  primary for no instances")
+    if sinst:
+      logger.ToStdout("  secondary for instances:")
+      for iname in sinst:
+        logger.ToStdout("    - %s" % iname)
+    else:
+      logger.ToStdout("  secondary for no instances")
+
+  return 0
+
+
+def RemoveNode(opts, args):
+  """Remove node cli-to-processor bridge."""
+  op = opcodes.OpRemoveNode(node_name=args[0])
+  SubmitOpCode(op)
+
+
+commands = {
+  'add': (AddNode, ARGS_ONE,
+          [DEBUG_OPT,
+           make_option("-s", "--secondary-ip", dest="secondary_ip",
+                       help="Specify the secondary ip for the node",
+                       metavar="ADDRESS", default=None),],
+          "<node_name>", "Add a node to the cluster"),
+  'info': (ShowNodeConfig, ARGS_ANY, [DEBUG_OPT],
+           "[<node_name>...]", "Show information about the node(s)"),
+  'list': (ListNodes, ARGS_NONE,
+           [DEBUG_OPT, NOHDR_OPT, SEP_OPT, USEUNITS_OPT,
+            make_option("-o", "--output", dest="output", action="store",
+                        type="string", help="Select output fields",
+                        metavar="FIELDS")
+            ],
+           "", "Lists the nodes in the cluster"),
+  'remove': (RemoveNode, ARGS_ONE, [DEBUG_OPT],
+             "<node_name>", "Removes a node from the cluster"),
+  }
+
+
+if __name__ == '__main__':
+  retcode = GenericMain(commands)
+  sys.exit(retcode)
diff --git a/scripts/gnt-os b/scripts/gnt-os
new file mode 100755
index 0000000000000000000000000000000000000000..00cb5da570db34b83966672cd0741c8f27c3fd5c
--- /dev/null
+++ b/scripts/gnt-os
@@ -0,0 +1,139 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+import sys
+from optparse import make_option
+
+from ganeti.cli import *
+from ganeti import opcodes
+from ganeti import logger
+from ganeti import objects
+from ganeti import utils
+from ganeti import errors
+
+
+def ListOS(opts, args):
+  """List the OSes existing on this node.
+
+  """
+  op = opcodes.OpDiagnoseOS()
+  result = SubmitOpCode(op)
+
+  if not result:
+    logger.ToStdout("Can't get the OS list")
+    return 1
+
+  # filter non-valid OS-es
+  oses = {}
+  for node_name in result:
+    oses[node_name] = [obj for obj in result[node_name]
+                       if isinstance(obj, objects.OS)]
+
+  fnode = oses.keys()[0]
+  os_set = set([os_inst.name for os_inst in oses[fnode]])
+  del oses[fnode]
+  for node in oses:
+    os_set &= set([os_inst.name for os_inst in oses[node]])
+
+  format = "%s"
+
+  if not opts.no_headers:
+    logger.ToStdout(format % 'Name')
+
+  for os_name in os_set:
+    logger.ToStdout(format % os_name)
+
+  return 0
+
+
+def DiagnoseOS(opts, args):
+  """Analyse all OSes on this cluster.
+
+  """
+  op = opcodes.OpDiagnoseOS()
+  result = SubmitOpCode(op)
+
+  if not result:
+    logger.ToStdout("Can't get the OS list")
+    return 1
+
+  format = "%-*s %-*s %s"
+
+  node_data = result
+  all_os = {}
+  for node_name in node_data:
+    nr = node_data[node_name]
+    if nr:
+      for obj in nr:
+        if isinstance(obj, objects.OS):
+          os_name = obj.name
+        else:
+          os_name = obj.args[0]
+        if os_name not in all_os:
+          all_os[os_name] = {}
+        all_os[os_name][node_name] = obj
+
+  max_name = len('Name')
+  if all_os:
+    max_name = max(max_name, max([len(name) for name in all_os]))
+
+  max_node = len('Status/Node')
+  max_node = max(max_node, max([len(name) for name in node_data]))
+
+  logger.ToStdout(format % (max_name, 'Name', max_node, 'Status/Node',
+                            'Details'))
+
+  for os_name in all_os:
+    nodes_valid = []
+    nodes_bad = {}
+    for node_name in node_data:
+      nos = all_os[os_name].get(node_name, None)
+      if isinstance(nos, objects.OS):
+        nodes_valid.append(node_name)
+      elif isinstance(nos, errors.InvalidOS):
+        nodes_bad[node_name] = nos.args[1]
+      else:
+        nodes_bad[node_name] = "os dir not found"
+
+    if nodes_valid and not nodes_bad:
+      status = "valid"
+    elif not nodes_valid and nodes_bad:
+      status = "invalid"
+    else:
+      status = "partial valid"
+    logger.ToStdout(format % (max_name, os_name, max_node, status, ""))
+    nodes_valid = utils.NiceSort(nodes_valid)
+    for node_name in nodes_valid:
+      logger.ToStdout(format % (max_name, "", max_node, node_name, "valid"))
+    nbk = utils.NiceSort(nodes_bad.keys())
+    for node_name in nbk:
+      logger.ToStdout(format % (max_name, "", max_node,
+                                node_name, nodes_bad[node_name]))
+
+
+commands = {
+  'list': (ListOS, ARGS_NONE, [DEBUG_OPT, NOHDR_OPT], "",
+           "Lists all valid OSes on the master"),
+  'diagnose': (DiagnoseOS, ARGS_NONE, [DEBUG_OPT], "",
+               "Diagnose all OSes"),
+  }
+
+if __name__ == '__main__':
+  retcode = GenericMain(commands)
+  sys.exit(retcode)
diff --git a/testing/Makefile.am b/testing/Makefile.am
new file mode 100644
index 0000000000000000000000000000000000000000..115bce2352f90745374bbcc31943b2adcb7b1f3a
--- /dev/null
+++ b/testing/Makefile.am
@@ -0,0 +1,9 @@
+TESTS = ganeti.hooks_unittest.py ganeti.utils_unittest.py
+TESTS_ENVIRONMENT = PYTHONPATH=.:$(srcdir)
+
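+# expose the lib/ directory as an importable "ganeti" package for the tests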
+check_DATA = ganeti
+ganeti:
+	rm -f ganeti
+	ln -s $(top_srcdir)/lib ganeti
+
+EXTRA_DIST = $(TESTS) fake_config.py ganeti.qa.py qa-sample.yaml
diff --git a/testing/fake_config.py b/testing/fake_config.py
new file mode 100644
index 0000000000000000000000000000000000000000..d19becc23f1967a9768f958878e7006b279e924e
--- /dev/null
+++ b/testing/fake_config.py
@@ -0,0 +1,39 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Module implementing a fake ConfigWriter"""
+
+import socket
+
+class FakeConfig:
+    """Fake configuration object"""
+
+    def IsCluster(self):
+        return True
+
+    def GetClusterName(self):
+        return "test.cluster"
+
+    def GetNodeList(self):
+        return ["a", "b", "c"]
+
+    def GetMaster(self):
+        return socket.gethostname()
diff --git a/testing/ganeti.hooks_unittest.py b/testing/ganeti.hooks_unittest.py
new file mode 100755
index 0000000000000000000000000000000000000000..6651e6dc7c9badcb6071f3f4cbff159246acb7c9
--- /dev/null
+++ b/testing/ganeti.hooks_unittest.py
@@ -0,0 +1,268 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Script for unittesting the hooks module"""
+
+
+import unittest
+import os
+import time
+import tempfile
+import os.path
+
+from ganeti import errors
+from ganeti import opcodes
+from ganeti import mcpu
+from ganeti import backend
+from ganeti import constants
+from ganeti import cmdlib
+from ganeti.constants import HKR_SUCCESS, HKR_FAIL, HKR_SKIP
+
+from fake_config import FakeConfig
+
+class FakeLU(cmdlib.LogicalUnit):
+  HPATH = "test"
+  def BuildHooksEnv(self):
+    return {}, ["localhost"], ["localhost"]
+
+class TestHooksRunner(unittest.TestCase):
+  """Testing case for HooksRunner"""
+  def setUp(self):
+    self.torm = []
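+    # entries are (path, is_directory) pairs; tearDown removes them in
+    # reverse order of creation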
+    self.tmpdir = tempfile.mkdtemp()
+    self.torm.append((self.tmpdir, True))
+    self.logdir = tempfile.mkdtemp()
+    self.torm.append((self.logdir, True))
+    self.hpath = "fake"
+    self.ph_dirs = {}
+    for i in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      dname = "%s/%s-%s.d" % (self.tmpdir, self.hpath, i)
+      os.mkdir(dname)
+      self.torm.append((dname, True))
+      self.ph_dirs[i] = dname
+    self.hr = backend.HooksRunner(hooks_base_dir=self.tmpdir)
+
+  def tearDown(self):
+    self.torm.reverse()
+    for path, kind in self.torm:
+      if kind:
+        os.rmdir(path)
+      else:
+        os.unlink(path)
+
+  def _rname(self, fname):
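+    # keep only the last two path components ("<hooks dir>/<script>"),
+    # the relative form in which RunHooks is expected to report names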
+    return "/".join(fname.split("/")[-2:])
+
+  def testEmpty(self):
+    """Test no hooks"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}), [])
+
+  def testSkipNonExec(self):
+    """Test skip non-exec file"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      fname = "%s/test" % self.ph_dirs[phase]
+      f = open(fname, "w")
+      f.close()
+      self.torm.append((fname, False))
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}),
+                           [(self._rname(fname), HKR_SKIP, "")])
+
+  def testSkipInvalidName(self):
+    """Test skip script with invalid name"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      fname = "%s/a.off" % self.ph_dirs[phase]
+      f = open(fname, "w")
+      f.write("#!/bin/sh\nexit 0\n")
+      f.close()
+      os.chmod(fname, 0700)
+      self.torm.append((fname, False))
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}),
+                           [(self._rname(fname), HKR_SKIP, "")])
+
+  def testSkipDir(self):
+    """Test skip directory"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      fname = "%s/testdir" % self.ph_dirs[phase]
+      os.mkdir(fname)
+      self.torm.append((fname, True))
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}),
+                           [(self._rname(fname), HKR_SKIP, "")])
+
+  def testSuccess(self):
+    """Test success execution"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      fname = "%s/success" % self.ph_dirs[phase]
+      f = open(fname, "w")
+      f.write("#!/bin/sh\nexit 0\n")
+      f.close()
+      self.torm.append((fname, False))
+      os.chmod(fname, 0700)
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}),
+                           [(self._rname(fname), HKR_SUCCESS, "")])
+
+  def testSymlink(self):
+    """Test running a symlink"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      fname = "%s/success" % self.ph_dirs[phase]
+      os.symlink("/bin/true", fname)
+      self.torm.append((fname, False))
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}),
+                           [(self._rname(fname), HKR_SUCCESS, "")])
+
+  def testFail(self):
+    """Test success execution"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      fname = "%s/success" % self.ph_dirs[phase]
+      f = open(fname, "w")
+      f.write("#!/bin/sh\nexit 1\n")
+      f.close()
+      self.torm.append((fname, False))
+      os.chmod(fname, 0700)
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}),
+                           [(self._rname(fname), HKR_FAIL, "")])
+
+  def testCombined(self):
+    """Test success, failure and skip all in one test"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      expect = []
+      for fbase, ecode, rs in [("00succ", 0, HKR_SUCCESS),
+                               ("10fail", 1, HKR_FAIL),
+                               ("20inv.", 0, HKR_SKIP),
+                               ]:
+        fname = "%s/%s" % (self.ph_dirs[phase], fbase)
+        f = open(fname, "w")
+        f.write("#!/bin/sh\nexit %d\n" % ecode)
+        f.close()
+        self.torm.append((fname, False))
+        os.chmod(fname, 0700)
+        expect.append((self._rname(fname), rs, ""))
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}), expect)
+
+  def testOrdering(self):
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      expect = []
+      for fbase in ["10s1",
+                    "00s0",
+                    "10sa",
+                    "80sc",
+                    "60sd",
+                    ]:
+        fname = "%s/%s" % (self.ph_dirs[phase], fbase)
+        os.symlink("/bin/true", fname)
+        self.torm.append((fname, False))
+        expect.append((self._rname(fname), HKR_SUCCESS, ""))
+      expect.sort()
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, {}), expect)
+
+  def testEnv(self):
+    """Test environment execution"""
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      fbase = "success"
+      fname = "%s/%s" % (self.ph_dirs[phase], fbase)
+      os.symlink("/usr/bin/env", fname)
+      self.torm.append((fname, False))
+      env_snt = {"PHASE": phase}
+      env_exp = "PHASE=%s\n" % phase
+      self.failUnlessEqual(self.hr.RunHooks(self.hpath, phase, env_snt),
+                           [(self._rname(fname), HKR_SUCCESS, env_exp)])
+
+
+class TestHooksMaster(unittest.TestCase):
+  """Testing case for HooksMaster"""
+
+  def _call_false(*args):
+    """Fake call_hooks_runner function which returns False."""
+    return False
+
+  @staticmethod
+  def _call_nodes_false(node_list, hpath, phase, env):
+    """Fake call_hooks_runner function.
+
+    Returns:
+      - list of False values with the same len as the node_list argument
+
+    """
+    return [False for node_name in node_list]
+
+  @staticmethod
+  def _call_script_fail(node_list, hpath, phase, env):
+    """Fake call_hooks_runner function.
+
+    Returns:
+      - dict of failed hook script results, one entry per node in node_list
+
+    """
+    return dict([(node_name, [("unittest", constants.HKR_FAIL, "error")])
+                 for node_name in node_list])
+
+  @staticmethod
+  def _call_script_succeed(node_list, hpath, phase, env):
+    """Fake call_hooks_runner function.
+
+    Returns:
+      - dict of successful hook script results, one entry per node in node_list
+
+    """
+    return dict([(node_name, [("unittest", constants.HKR_SUCCESS, "ok")])
+                 for node_name in node_list])
+
+  def testTotalFalse(self):
+    """Test complete rpc failure"""
+    cfg = FakeConfig()
+    op = opcodes.OpCode()
+    lu = FakeLU(None, op, cfg, None)
+    hm = mcpu.HooksMaster(self._call_false, cfg, lu)
+    self.failUnlessRaises(errors.HooksFailure,
+                          hm.RunPhase, constants.HOOKS_PHASE_PRE)
+    hm.RunPhase(constants.HOOKS_PHASE_POST)
+
+  def testIndividualFalse(self):
+    """Test individual rpc failure"""
+    cfg = FakeConfig()
+    op = opcodes.OpCode()
+    lu = FakeLU(None, op, cfg, None)
+    hm = mcpu.HooksMaster(self._call_nodes_false, cfg, lu)
+    self.failUnlessRaises(errors.HooksFailure,
+                          hm.RunPhase, constants.HOOKS_PHASE_PRE)
+    hm.RunPhase(constants.HOOKS_PHASE_POST)
+
+  def testScriptFalse(self):
+    """Test individual rpc failure"""
+    cfg = FakeConfig()
+    op = opcodes.OpCode()
+    lu = FakeLU(None, op, cfg, None)
+    hm = mcpu.HooksMaster(self._call_script_fail, cfg, lu)
+    self.failUnlessRaises(errors.HooksAbort,
+                          hm.RunPhase, constants.HOOKS_PHASE_PRE)
+    hm.RunPhase(constants.HOOKS_PHASE_POST)
+
+  def testScriptSucceed(self):
+    """Test individual rpc failure"""
+    cfg = FakeConfig()
+    op = opcodes.OpCode()
+    lu = FakeLU(None, op, cfg, None)
+    hm = mcpu.HooksMaster(self._call_script_succeed, cfg, lu)
+    for phase in (constants.HOOKS_PHASE_PRE, constants.HOOKS_PHASE_POST):
+      hm.RunPhase(phase)
+
+if __name__ == '__main__':
+  unittest.main()
diff --git a/testing/ganeti.qa.py b/testing/ganeti.qa.py
new file mode 100755
index 0000000000000000000000000000000000000000..f0604cfe43dbacfa738b678fabbc2d56b70cd6b0
--- /dev/null
+++ b/testing/ganeti.qa.py
@@ -0,0 +1,691 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Script for doing Q&A on Ganeti"""
+
+import os
+import re
+import sys
+import yaml
+import time
+
+from datetime import datetime
+from optparse import OptionParser
+
+# I want more flexibility for testing over SSH, therefore I'm not using
+# Ganeti's ssh module.
+import subprocess
+
+from ganeti import utils
+from ganeti import constants
+
+# {{{ Global variables
+cfg = None
+options = None
+# }}}
+
+# {{{ Errors
+class Error(Exception):
+  """An error occurred during Q&A testing.
+
+  """
+  pass
+
+
+class OutOfNodesError(Error):
+  """Out of nodes.
+
+  """
+  pass
+
+
+class OutOfInstancesError(Error):
+  """Out of instances.
+
+  """
+  pass
+# }}}
+
+# {{{ Utilities
+def TestEnabled(test):
+  """Returns True if the given test is enabled."""
+  return cfg.get('tests', {}).get(test, False)
+
+
+def RunTest(callable, *args):
+  """Runs a test after printing a header.
+
+  """
+  if callable.__doc__:
+    desc = callable.__doc__.splitlines()[0].strip()
+  else:
+    desc = '%r' % callable
+
+  now = str(datetime.now())
+
+  print
+  print '---', now, ('-' * (55 - len(now)))
+  print desc
+  print '-' * 60
+
+  return callable(*args)
+
+
+def AssertEqual(first, second, msg=None):
+  """Raises an error when values aren't equal.
+
+  """
+  if first != second:
+    raise Error, (msg or '%r == %r' % (first, second))
+
+
+def GetSSHCommand(node, cmd, strict=True):
+  """Builds SSH command to be executed.
+
+  """
+  args = [ 'ssh', '-oEscapeChar=none', '-oBatchMode=yes', '-l', 'root' ]
+
+  if strict:
+    tmp = 'yes'
+  else:
+    tmp = 'no'
+  args.append('-oStrictHostKeyChecking=%s' % tmp)
+  args.append(node)
+
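+  # in dry-run mode, prefix the remote command with "exit 0;" so the
+  # shell exits before executing anything, while the SSH call itself
+  # still runs and succeeds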
+  if options.dry_run:
+    prefix = 'exit 0; '
+  else:
+    prefix = ''
+
+  args.append(prefix + cmd)
+
+  if options.verbose:
+    print 'SSH:', utils.ShellQuoteArgs(args)
+
+  return args
+
+
+def StartSSH(node, cmd, strict=True):
+  """Starts SSH.
+
+  """
+  args = GetSSHCommand(node, cmd, strict=strict)
+  return subprocess.Popen(args, shell=False)
+
+
+def UploadFile(node, file):
+  """Uploads a file to a node and returns the filename.
+
+  Caller needs to remove the file when it's not needed anymore.
+  """
+  if os.stat(file).st_mode & 0100:
+    mode = '0700'
+  else:
+    mode = '0600'
+
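+  # remote shell pipeline: create a temporary file with a matching
+  # mode, copy our stdin into it and print its name back to the caller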
+  cmd = ('tmp=$(tempfile --mode %s --prefix gnt) && '
+         '[[ -f "${tmp}" ]] && '
+         'cat > "${tmp}" && '
+         'echo "${tmp}"') % mode
+
+  f = open(file, 'r')
+  try:
+    p = subprocess.Popen(GetSSHCommand(node, cmd), shell=False, stdin=f,
+                         stdout=subprocess.PIPE)
+    AssertEqual(p.wait(), 0)
+
+    name = p.stdout.read().strip()
+
+    return name
+  finally:
+    f.close()
+# }}}
+
+# {{{ Config helpers
+def GetMasterNode():
+  return cfg['nodes'][0]
+
+
+def AcquireInstance():
+  """Returns an instance which isn't in use.
+
+  """
+  # Filter out unwanted instances
+  tmp_flt = lambda inst: not inst.get('_used', False)
+  instances = filter(tmp_flt, cfg['instances'])
+  del tmp_flt
+
+  if len(instances) == 0:
+    raise OutOfInstancesError, ("No instances left")
+
+  inst = instances[0]
+  inst['_used'] = True
+  return inst
+
+
+def ReleaseInstance(inst):
+  inst['_used'] = False
+
+
+def AcquireNode(exclude=None):
+  """Returns the least used node.
+
+  """
+  master = GetMasterNode()
+
+  # Filter out unwanted nodes
+  # TODO: Maybe combine filters
+  if exclude is None:
+    nodes = cfg['nodes'][:]
+  else:
+    nodes = filter(lambda node: node != exclude, cfg['nodes'])
+
+  tmp_flt = lambda node: node.get('_added', False) or node == master
+  nodes = filter(tmp_flt, nodes)
+  del tmp_flt
+
+  if len(nodes) == 0:
+    raise OutOfNodesError, ("No nodes left")
+
+  # Get node with least number of uses
+  def compare(a, b):
+    result = cmp(a.get('_count', 0), b.get('_count', 0))
+    if result == 0:
+      result = cmp(a['primary'], b['primary'])
+    return result
+
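+  # Sort ascending by use count, breaking ties by primary name, so the
+  # least-used node ends up first.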
+  nodes.sort(cmp=compare)
+
+  node = nodes[0]
+  node['_count'] = node.get('_count', 0) + 1
+  return node
+
+
+def ReleaseNode(node):
+  node['_count'] = node.get('_count', 0) - 1
+# }}}
+
+# {{{ Environment tests
+def TestConfig():
+  """Test configuration for sanity.
+
+  """
+  if len(cfg['nodes']) < 1:
+    raise Error, ("Need at least one node")
+  if len(cfg['instances']) < 1:
+    raise Error, ("Need at least one instance")
+  # TODO: Add more checks
+
+
+def TestSshConnection():
+  """Test SSH connection.
+
+  """
+  for node in cfg['nodes']:
+    AssertEqual(StartSSH(node['primary'], 'exit').wait(), 0)
+
+
+def TestGanetiCommands():
+  """Test availibility of Ganeti commands.
+
+  """
+  cmds = ( ['gnt-cluster', '--version'],
+           ['gnt-os', '--version'],
+           ['gnt-node', '--version'],
+           ['gnt-instance', '--version'],
+           ['gnt-backup', '--version'],
+           ['ganeti-noded', '--version'],
+           ['ganeti-watcher', '--version'] )
+
+  cmd = ' && '.join(map(utils.ShellQuoteArgs, cmds))
+
+  for node in cfg['nodes']:
+    AssertEqual(StartSSH(node['primary'], cmd).wait(), 0)
+
+
+def TestIcmpPing():
+  """ICMP ping each node.
+
+  """
+  # The list of addresses to ping is the same for every node, so
+  # compute it only once.
+  check = []
+  for i in cfg['nodes']:
+    check.append(i['primary'])
+    if i.has_key('secondary'):
+      check.append(i['secondary'])
+
+  ping = lambda ip: utils.ShellQuoteArgs(['ping', '-w', '3', '-c', '1', ip])
+  cmd = ' && '.join(map(ping, check))
+
+  for node in cfg['nodes']:
+    AssertEqual(StartSSH(node['primary'], cmd).wait(), 0)
+# }}}
+
+# {{{ Cluster tests
+def TestClusterInit():
+  """gnt-cluster init"""
+  master = GetMasterNode()
+
+  cmd = ['gnt-cluster', 'init']
+  if master.get('secondary', None):
+    cmd.append('--secondary-ip=%s' % master['secondary'])
+  cmd.append(cfg['name'])
+
+  AssertEqual(StartSSH(master['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
+def TestClusterVerify():
+  """gnt-cluster verify"""
+  cmd = ['gnt-cluster', 'verify']
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
+def TestClusterInfo():
+  """gnt-cluster info"""
+  cmd = ['gnt-cluster', 'info']
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
+def TestClusterBurnin():
+  """Burnin"""
+  master = GetMasterNode()
+
+  # Get as many instances as we need
+  instances = []
+  try:
+    for _ in xrange(0, cfg.get('options', {}).get('burnin-instances', 1)):
+      instances.append(AcquireInstance())
+  except OutOfInstancesError:
+    print "Not enough instances, continuing anyway."
+
+  if len(instances) < 1:
+    raise Error, ("Burnin needs at least one instance")
+
+  # Run burnin
+  try:
+    script = UploadFile(master['primary'], '../tools/burnin')
+    try:
+      cmd = [script, '--os=%s' % cfg['os']]
+      cmd += map(lambda inst: inst['name'], instances)
+      AssertEqual(StartSSH(master['primary'],
+                           utils.ShellQuoteArgs(cmd)).wait(), 0)
+    finally:
+      cmd = ['rm', '-f', script]
+      AssertEqual(StartSSH(master['primary'],
+                           utils.ShellQuoteArgs(cmd)).wait(), 0)
+  finally:
+    for inst in instances:
+      ReleaseInstance(inst)
+
+
+def TestClusterMasterFailover():
+  """gnt-cluster masterfailover"""
+  master = GetMasterNode()
+
+  failovermaster = AcquireNode(exclude=master)
+  try:
+    cmd = ['gnt-cluster', 'masterfailover']
+    AssertEqual(StartSSH(failovermaster['primary'],
+                         utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+    cmd = ['gnt-cluster', 'masterfailover']
+    AssertEqual(StartSSH(master['primary'],
+                         utils.ShellQuoteArgs(cmd)).wait(), 0)
+  finally:
+    ReleaseNode(failovermaster)
+
+
+def TestClusterDestroy():
+  """gnt-cluster destroy"""
+  cmd = ['gnt-cluster', 'destroy', '--yes-do-it']
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+# }}}
+
+# {{{ Node tests
+def _NodeAdd(node):
+  if node.get('_added', False):
+    raise Error, ("Node %s already in cluster" % node['primary'])
+
+  cmd = ['gnt-node', 'add']
+  if node.get('secondary', None):
+    cmd.append('--secondary-ip=%s' % node['secondary'])
+  cmd.append(node['primary'])
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+  node['_added'] = True
+
+
+def TestNodeAddAll():
+  """Adding all nodes to cluster."""
+  master = GetMasterNode()
+  for node in cfg['nodes']:
+    if node != master:
+      _NodeAdd(node)
+
+
+def _NodeRemove(node):
+  cmd = ['gnt-node', 'remove', node['primary']]
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+  node['_added'] = False
+
+
+def TestNodeRemoveAll():
+  """Removing all nodes from cluster."""
+  master = GetMasterNode()
+  for node in cfg['nodes']:
+    if node != master:
+      _NodeRemove(node)
+# }}}
+
+# {{{ Instance tests
+def _DiskTest(node, instance, args):
+  cmd = ['gnt-instance', 'add',
+         '--os-type=%s' % cfg['os'],
+         '--os-size=%s' % cfg['os-size'],
+         '--swap-size=%s' % cfg['swap-size'],
+         '--memory=%s' % cfg['mem'],
+         '--node=%s' % node['primary']]
+  if args:
+    cmd += args
+  cmd.append(instance['name'])
+
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+  return instance
+
+
+def TestInstanceAddWithPlainDisk(node):
+  """gnt-instance add -t plain"""
+  return _DiskTest(node, AcquireInstance(), ['--disk-template=plain'])
+
+
+def TestInstanceAddWithLocalMirrorDisk(node):
+  """gnt-instance add -t local_raid1"""
+  return _DiskTest(node, AcquireInstance(), ['--disk-template=local_raid1'])
+
+
+def TestInstanceAddWithRemoteRaidDisk(node, node2):
+  """gnt-instance add -t remote_raid1"""
+  return _DiskTest(node, AcquireInstance(),
+                   ['--disk-template=remote_raid1',
+                    '--secondary-node=%s' % node2['primary']])
+
+
+def TestInstanceRemove(instance):
+  """gnt-instance remove"""
+  cmd = ['gnt-instance', 'remove', '-f', instance['name']]
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+  ReleaseInstance(instance)
+
+
+def TestInstanceStartup(instance):
+  """gnt-instance startup"""
+  cmd = ['gnt-instance', 'startup', instance['name']]
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
+def TestInstanceShutdown(instance):
+  """gnt-instance shutdown"""
+  cmd = ['gnt-instance', 'shutdown', instance['name']]
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
+def TestInstanceFailover(instance):
+  """gnt-instance failover"""
+  cmd = ['gnt-instance', 'failover', '--force', instance['name']]
+  AssertEqual(StartSSH(GetMasterNode()['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+# }}}
+
+# {{{ Daemon tests
+def _ResolveInstanceName(instance):
+  """Gets the full Xen name of an instance.
+
+  """
+  master = GetMasterNode()
+
+  info_cmd = utils.ShellQuoteArgs(['gnt-instance', 'info', instance['name']])
+  sed_cmd = utils.ShellQuoteArgs(['sed', '-n', '-e', 's/^Instance name: *//p'])
+
+  cmd = '%s | %s' % (info_cmd, sed_cmd)
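+  # i.e.: gnt-instance info <name> | sed -n -e 's/^Instance name: *//p'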
+  p = subprocess.Popen(GetSSHCommand(master['primary'], cmd), shell=False,
+                       stdout=subprocess.PIPE)
+  AssertEqual(p.wait(), 0)
+
+  return p.stdout.read().strip()
+
+
+def _InstanceRunning(node, name):
+  """Checks whether an instance is running.
+
+  Args:
+    node: Node the instance runs on
+    name: Full name of Xen instance
+  """
+  cmd = utils.ShellQuoteArgs(['xm', 'list', name]) + ' >/dev/null'
+  ret = StartSSH(node['primary'], cmd).wait()
+  return ret == 0
+
+
+def _XmShutdownInstance(node, name):
+  """Shuts down instance using "xm" and waits for completion.
+
+  Args:
+    node: Node the instance runs on
+    name: Full name of Xen instance
+  """
+  cmd = ['xm', 'shutdown', name]
+  AssertEqual(StartSSH(node['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+  # Wait up to a minute
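+  # (the "else" branch of the while loop below only runs if the loop
+  # was never broken out of, i.e. the instance kept running)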
+  end = time.time() + 60
+  while time.time() <= end:
+    if not _InstanceRunning(node, name):
+      break
+    time.sleep(5)
+  else:
+    raise Error, ("xm shutdown failed")
+
+
+def _ResetWatcherDaemon(node):
+  """Removes the watcher daemon's state file.
+
+  Args:
+    node: Node to be reset
+  """
+  cmd = ['rm', '-f', constants.WATCHER_STATEFILE]
+  AssertEqual(StartSSH(node['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
+def TestInstanceAutomaticRestart(node, instance):
+  """Test automatic restart of instance by ganeti-watcher.
+
+  Note: takes up to 6 minutes to complete.
+  """
+  master = GetMasterNode()
+  inst_name = _ResolveInstanceName(instance)
+
+  _ResetWatcherDaemon(node)
+  _XmShutdownInstance(node, inst_name)
+
+  # Give it a bit more than five minutes to start again
+  restart_at = time.time() + 330
+
+  # Wait until it's running again
+  while time.time() <= restart_at:
+    if _InstanceRunning(node, inst_name):
+      break
+    time.sleep(15)
+  else:
+    raise Error, ("Daemon didn't restart instance in time")
+
+  cmd = ['gnt-instance', 'info', inst_name]
+  AssertEqual(StartSSH(master['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
+def TestInstanceConsecutiveFailures(node, instance):
+  """Test five consecutive instance failures.
+
+  Note: takes at least 35 minutes to complete.
+  """
+  master = GetMasterNode()
+  inst_name = _ResolveInstanceName(instance)
+
+  _ResetWatcherDaemon(node)
+  _XmShutdownInstance(node, inst_name)
+
+  # Do shutdowns for 35 minutes
+  finished_at = time.time() + (35 * 60)
+
+  while time.time() <= finished_at:
+    if _InstanceRunning(node, inst_name):
+      _XmShutdownInstance(node, inst_name)
+    time.sleep(30)
+
+  # Check for a while that the instance doesn't start again
+  check_until = time.time() + 330
+  while time.time() <= check_until:
+    if _InstanceRunning(node, inst_name):
+      raise Error, ("Instance started when it shouldn't")
+    time.sleep(30)
+
+  cmd = ['gnt-instance', 'info', inst_name]
+  AssertEqual(StartSSH(master['primary'],
+                       utils.ShellQuoteArgs(cmd)).wait(), 0)
+# }}}
+
+# {{{ Main program
+if __name__ == '__main__':
+  # {{{ Option parsing
+  parser = OptionParser(usage="%prog [options] <configfile>")
+  parser.add_option('--cleanup', dest='cleanup',
+      action="store_true",
+      help="Clean up cluster after testing?")
+  parser.add_option('--dry-run', dest='dry_run',
+      action="store_true",
+      help="Show what would be done")
+  parser.add_option('--verbose', dest='verbose',
+      action="store_true",
+      help="Verbose output")
+  parser.add_option('--yes-do-it', dest='yes_do_it',
+      action="store_true",
+      help="Really execute the tests")
+  (options, args) = parser.parse_args()
+  # }}}
+
+  if len(args) == 1:
+    config_file = args[0]
+  else:
+    raise SyntaxError, ("Exactly one configuration file is expected")
+
+  if not options.yes_do_it:
+    print ("Executing this script irreversibly destroys any Ganeti\n"
+           "configuration on all nodes involved. If you really want\n"
+           "to start testing, supply the --yes-do-it option.")
+    sys.exit(1)
+
+  f = open(config_file, 'r')
+  try:
+    cfg = yaml.load(f.read())
+  finally:
+    f.close()
+
+  RunTest(TestConfig)
+
+  if TestEnabled('env'):
+    RunTest(TestSshConnection)
+    RunTest(TestIcmpPing)
+    RunTest(TestGanetiCommands)
+
+  RunTest(TestClusterInit)
+
+  if TestEnabled('cluster-verify'):
+    RunTest(TestClusterVerify)
+    RunTest(TestClusterInfo)
+
+  RunTest(TestNodeAddAll)
+
+  if TestEnabled('cluster-burnin'):
+    RunTest(TestClusterBurnin)
+
+  if TestEnabled('cluster-master-failover'):
+    RunTest(TestClusterMasterFailover)
+
+  node = AcquireNode()
+  try:
+    if TestEnabled('instance-add-plain-disk'):
+      instance = RunTest(TestInstanceAddWithPlainDisk, node)
+      RunTest(TestInstanceShutdown, instance)
+      RunTest(TestInstanceStartup, instance)
+
+      if TestEnabled('instance-automatic-restart'):
+        RunTest(TestInstanceAutomaticRestart, node, instance)
+
+      if TestEnabled('instance-consecutive-failures'):
+        RunTest(TestInstanceConsecutiveFailures, node, instance)
+
+      RunTest(TestInstanceRemove, instance)
+      del instance
+
+    if TestEnabled('instance-add-local-mirror-disk'):
+      instance = RunTest(TestInstanceAddWithLocalMirrorDisk, node)
+      RunTest(TestInstanceShutdown, instance)
+      RunTest(TestInstanceStartup, instance)
+      RunTest(TestInstanceRemove, instance)
+      del instance
+
+    if TestEnabled('instance-add-remote-raid-disk'):
+      node2 = AcquireNode(exclude=node)
+      try:
+        instance = RunTest(TestInstanceAddWithRemoteRaidDisk, node, node2)
+        RunTest(TestInstanceShutdown, instance)
+        RunTest(TestInstanceStartup, instance)
+
+        if TestEnabled('instance-failover'):
+          RunTest(TestInstanceFailover, instance)
+
+        RunTest(TestInstanceRemove, instance)
+        del instance
+      finally:
+        ReleaseNode(node2)
+
+  finally:
+    ReleaseNode(node)
+
+  RunTest(TestNodeRemoveAll)
+
+  if TestEnabled('cluster-destroy'):
+    RunTest(TestClusterDestroy)
+# }}}
+
+# vim: foldmethod=marker :
diff --git a/testing/ganeti.utils_unittest.py b/testing/ganeti.utils_unittest.py
new file mode 100755
index 0000000000000000000000000000000000000000..947697d04d25510470c81a049f39019243081d1a
--- /dev/null
+++ b/testing/ganeti.utils_unittest.py
@@ -0,0 +1,415 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Script for unittesting the utils module"""
+
+import unittest
+import os
+import time
+import tempfile
+import os.path
+import md5
+
+import ganeti
+from ganeti.utils import IsProcessAlive, Lock, Unlock, RunCmd, \
+     RemoveFile, CheckDict, MatchNameComponent, FormatUnit, \
+     ParseUnit, AddAuthorizedKey, RemoveAuthorizedKey, \
+     ShellQuote, ShellQuoteArgs
+from ganeti.errors import LockError, UnitParseError
+
+class TestIsProcessAlive(unittest.TestCase):
+  """Testing case for IsProcessAlive"""
+  def setUp(self):
+    # create a zombie and a (hopefully) non-existing process id
+    self.pid_zombie = os.fork()
+    if self.pid_zombie == 0:
+      os._exit(0)
+    elif self.pid_zombie < 0:
+      raise SystemError("can't fork")
+    self.pid_non_existing = os.fork()
+    if self.pid_non_existing == 0:
+      os._exit(0)
+    elif self.pid_non_existing > 0:
+      os.waitpid(self.pid_non_existing, 0)
+    else:
+      raise SystemError("can't fork")
+
+
+  def testExists(self):
+    mypid = os.getpid()
+    self.assert_(IsProcessAlive(mypid),
+                 "can't find myself running")
+
+  def testZombie(self):
+    self.assert_(not IsProcessAlive(self.pid_zombie),
+                 "zombie not detected as zombie")
+
+
+  def testNotExisting(self):
+    self.assert_(not IsProcessAlive(self.pid_non_existing),
+                 "noexisting process detected")
+
+
+class TestLocking(unittest.TestCase):
+  """Testing case for the Lock/Unlock functions"""
+  def clean_lock(self, name):
+    try:
+      ganeti.utils.Unlock("unittest")
+    except LockError:
+      pass
+
+
+  def testLock(self):
+    self.clean_lock("unittest")
+    self.assertEqual(None, Lock("unittest"))
+
+
+  def testUnlock(self):
+    self.clean_lock("unittest")
+    ganeti.utils.Lock("unittest")
+    self.assertEqual(None, Unlock("unittest"))
+
+
+  def testDoubleLock(self):
+    self.clean_lock("unittest")
+    ganeti.utils.Lock("unittest")
+    self.assertRaises(LockError, Lock, "unittest")
+
+
+class TestRunCmd(unittest.TestCase):
+  """Testing case for the RunCmd function"""
+
+  def setUp(self):
+    self.magic = time.ctime() + " ganeti test"
+
+  def testOk(self):
+    """Test successfull exit code"""
+    result = RunCmd("/bin/sh -c 'exit 0'")
+    self.assertEqual(result.exit_code, 0)
+
+  def testFail(self):
+    """Test fail exit code"""
+    result = RunCmd("/bin/sh -c 'exit 1'")
+    self.assertEqual(result.exit_code, 1)
+
+
+  def testStdout(self):
+    """Test standard output"""
+    cmd = 'echo -n "%s"' % self.magic
+    result = RunCmd("/bin/sh -c '%s'" % cmd)
+    self.assertEqual(result.stdout, self.magic)
+
+
+  def testStderr(self):
+    """Test standard error"""
+    cmd = 'echo -n "%s"' % self.magic
+    result = RunCmd("/bin/sh -c '%s' 1>&2" % cmd)
+    self.assertEqual(result.stderr, self.magic)
+
+
+  def testCombined(self):
+    """Test combined output"""
+    cmd = 'echo -n "A%s"; echo -n "B%s" 1>&2' % (self.magic, self.magic)
+    result = RunCmd("/bin/sh -c '%s'" % cmd)
+    self.assertEqual(result.output, "A" + self.magic + "B" + self.magic)
+
+  def testSignal(self):
+    """Test standard error"""
+    result = RunCmd("/bin/sh -c 'kill -15 $$'")
+    self.assertEqual(result.signal, 15)
+
+
+class TestRemoveFile(unittest.TestCase):
+  """Test case for the RemoveFile function"""
+
+  def setUp(self):
+    """Create a temp dir and file for each case"""
+    self.tmpdir = tempfile.mkdtemp('', 'ganeti-unittest-')
+    fd, self.tmpfile = tempfile.mkstemp('', '', self.tmpdir)
+    os.close(fd)
+
+  def tearDown(self):
+    if os.path.exists(self.tmpfile):
+      os.unlink(self.tmpfile)
+    os.rmdir(self.tmpdir)
+
+
+  def testIgnoreDirs(self):
+    """Test that RemoveFile() ignores directories"""
+    self.assertEqual(None, RemoveFile(self.tmpdir))
+
+
+  def testIgnoreNotExisting(self):
+    """Test that RemoveFile() ignores non-existing files"""
+    RemoveFile(self.tmpfile)
+    RemoveFile(self.tmpfile)
+
+
+  def testRemoveFile(self):
+    """Test that RemoveFile does remove a file"""
+    RemoveFile(self.tmpfile)
+    if os.path.exists(self.tmpfile):
+      self.fail("File '%s' not removed" % self.tmpfile)
+
+
+  def testRemoveSymlink(self):
+    """Test that RemoveFile does remove symlinks"""
+    symlink = self.tmpdir + "/symlink"
+    os.symlink("no-such-file", symlink)
+    RemoveFile(symlink)
+    if os.path.exists(symlink):
+      self.fail("File '%s' not removed" % symlink)
+    os.symlink(self.tmpfile, symlink)
+    RemoveFile(symlink)
+    if os.path.exists(symlink):
+      self.fail("File '%s' not removed" % symlink)
+
+
+class TestCheckdict(unittest.TestCase):
+  """Test case for the CheckDict function"""
+
+  def testAdd(self):
+    """Test that CheckDict adds a missing key with the correct value"""
+
+    tgt = {'a':1}
+    tmpl = {'b': 2}
+    CheckDict(tgt, tmpl)
+    if 'b' not in tgt or tgt['b'] != 2:
+      self.fail("Failed to update dict")
+
+
+  def testNoUpdate(self):
+    """Test that CheckDict does not overwrite an existing key"""
+    tgt = {'a':1, 'b': 3}
+    tmpl = {'b': 2}
+    CheckDict(tgt, tmpl)
+    self.failUnlessEqual(tgt['b'], 3)
+
+
+class TestMatchNameComponent(unittest.TestCase):
+  """Test case for the MatchNameComponent function"""
+
+  def testEmptyList(self):
+    """Test that there is no match against an empty list"""
+
+    self.failUnlessEqual(MatchNameComponent("", []), None)
+    self.failUnlessEqual(MatchNameComponent("test", []), None)
+
+  def testSingleMatch(self):
+    """Test that a single match is performed correctly"""
+    mlist = ["test1.example.com", "test2.example.com", "test3.example.com"]
+    for key in "test2", "test2.example", "test2.example.com":
+      self.failUnlessEqual(MatchNameComponent(key, mlist), mlist[1])
+
+  def testMultipleMatches(self):
+    """Test that a multiple match is returned as None"""
+    mlist = ["test1.example.com", "test1.example.org", "test1.example.net"]
+    for key in "test1", "test1.example":
+      self.failUnlessEqual(MatchNameComponent(key, mlist), None)
+
+
+class TestFormatUnit(unittest.TestCase):
+  """Test case for the FormatUnit function"""
+
+  def testMiB(self):
+    self.assertEqual(FormatUnit(1), '1M')
+    self.assertEqual(FormatUnit(100), '100M')
+    self.assertEqual(FormatUnit(1023), '1023M')
+
+  def testGiB(self):
+    self.assertEqual(FormatUnit(1024), '1.0G')
+    self.assertEqual(FormatUnit(1536), '1.5G')
+    self.assertEqual(FormatUnit(17133), '16.7G')
+    self.assertEqual(FormatUnit(1024 * 1024 - 1), '1024.0G')
+
+  def testTiB(self):
+    self.assertEqual(FormatUnit(1024 * 1024), '1.0T')
+    self.assertEqual(FormatUnit(5120 * 1024), '5.0T')
+    self.assertEqual(FormatUnit(29829 * 1024), '29.1T')
+
+
+class TestParseUnit(unittest.TestCase):
+  """Test case for the ParseUnit function"""
+
+  SCALES = (('', 1),
+            ('M', 1), ('G', 1024), ('T', 1024 * 1024),
+            ('MB', 1), ('GB', 1024), ('TB', 1024 * 1024),
+            ('MiB', 1), ('GiB', 1024), ('TiB', 1024 * 1024))
+
+  def testRounding(self):
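+    # ParseUnit rounds values up to the next multiple of 4 MiB; this is
+    # why e.g. '1' yields 4 and '125' yields 128 below.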
+    self.assertEqual(ParseUnit('0'), 0)
+    self.assertEqual(ParseUnit('1'), 4)
+    self.assertEqual(ParseUnit('2'), 4)
+    self.assertEqual(ParseUnit('3'), 4)
+
+    self.assertEqual(ParseUnit('124'), 124)
+    self.assertEqual(ParseUnit('125'), 128)
+    self.assertEqual(ParseUnit('126'), 128)
+    self.assertEqual(ParseUnit('127'), 128)
+    self.assertEqual(ParseUnit('128'), 128)
+    self.assertEqual(ParseUnit('129'), 132)
+    self.assertEqual(ParseUnit('130'), 132)
+
+  def testFloating(self):
+    self.assertEqual(ParseUnit('0'), 0)
+    self.assertEqual(ParseUnit('0.5'), 4)
+    self.assertEqual(ParseUnit('1.75'), 4)
+    self.assertEqual(ParseUnit('1.99'), 4)
+    self.assertEqual(ParseUnit('2.00'), 4)
+    self.assertEqual(ParseUnit('2.01'), 4)
+    self.assertEqual(ParseUnit('3.99'), 4)
+    self.assertEqual(ParseUnit('4.00'), 4)
+    self.assertEqual(ParseUnit('4.01'), 8)
+    self.assertEqual(ParseUnit('1.5G'), 1536)
+    self.assertEqual(ParseUnit('1.8G'), 1844)
+    self.assertEqual(ParseUnit('8.28T'), 8682212)
+
+  def testSuffixes(self):
+    for sep in ('', ' ', '   ', "\t", "\t "):
+      for suffix, scale in TestParseUnit.SCALES:
+        for func in (lambda x: x, str.lower, str.upper):
+          self.assertEqual(ParseUnit('1024' + sep + func(suffix)), 1024 * scale)
+
+  def testInvalidInput(self):
+    for sep in ('-', '_', ',', 'a'):
+      for suffix, _ in TestParseUnit.SCALES:
+        self.assertRaises(UnitParseError, ParseUnit, '1' + sep + suffix)
+
+    for suffix, _ in TestParseUnit.SCALES:
+      self.assertRaises(UnitParseError, ParseUnit, '1,3' + suffix)
+
+
+class TestSshKeys(unittest.TestCase):
+  """Test case for the AddAuthorizedKey function"""
+
+  KEY_A = 'ssh-dss AAAAB3NzaC1w5256closdj32mZaQU root@key-a'
+  KEY_B = ('command="/usr/bin/fooserver -t --verbose",from="1.2.3.4" '
+           'ssh-dss AAAAB3NzaC1w520smc01ms0jfJs22 root@key-b')
+
+  # NOTE: The MD5 sums below were calculated after manually
+  #       checking the output files.
+
+  def writeTestFile(self):
+    (fd, tmpname) = tempfile.mkstemp(prefix='ganeti-test')
+    f = os.fdopen(fd, 'w')
+    try:
+      f.write(TestSshKeys.KEY_A)
+      f.write("\n")
+      f.write(TestSshKeys.KEY_B)
+      f.write("\n")
+    finally:
+      f.close()
+
+    return tmpname
+
+  def testAddingNewKey(self):
+    tmpname = self.writeTestFile()
+    try:
+      AddAuthorizedKey(tmpname, 'ssh-dss AAAAB3NzaC1kc3MAAACB root@test')
+
+      f = open(tmpname, 'r')
+      try:
+        self.assertEqual(md5.new(f.read(8192)).hexdigest(),
+                         'ccc71523108ca6e9d0343797dc3e9f16')
+      finally:
+        f.close()
+    finally:
+      os.unlink(tmpname)
+
+  def testAddingAlmostButNotCompletelyTheSameKey(self):
+    tmpname = self.writeTestFile()
+    try:
+      AddAuthorizedKey(tmpname,
+          'ssh-dss AAAAB3NzaC1w5256closdj32mZaQU root@test')
+
+      f = open(tmpname, 'r')
+      try:
+        self.assertEqual(md5.new(f.read(8192)).hexdigest(),
+                         'f2c939d57addb5b3a6846884be896b46')
+      finally:
+        f.close()
+    finally:
+      os.unlink(tmpname)
+
+  def testAddingExistingKeyWithSomeMoreSpaces(self):
+    tmpname = self.writeTestFile()
+    try:
+      AddAuthorizedKey(tmpname,
+          'ssh-dss  AAAAB3NzaC1w5256closdj32mZaQU   root@key-a')
+
+      f = open(tmpname, 'r')
+      try:
+        self.assertEqual(md5.new(f.read(8192)).hexdigest(),
+                         '4e612764808bd46337eb0f575415fc30')
+      finally:
+        f.close()
+    finally:
+      os.unlink(tmpname)
+
+  def testRemovingExistingKeyWithSomeMoreSpaces(self):
+    tmpname = self.writeTestFile()
+    try:
+      RemoveAuthorizedKey(tmpname,
+          'ssh-dss  AAAAB3NzaC1w5256closdj32mZaQU   root@key-a')
+
+      f = open(tmpname, 'r')
+      try:
+        self.assertEqual(md5.new(f.read(8192)).hexdigest(),
+                         '77516d987fca07f70e30b830b3e4f2ed')
+      finally:
+        f.close()
+    finally:
+      os.unlink(tmpname)
+
+  def testRemovingNonExistingKey(self):
+    tmpname = self.writeTestFile()
+    try:
+      RemoveAuthorizedKey(tmpname,
+          'ssh-dss  AAAAB3Nsdfj230xxjxJjsjwjsjdjU   root@test')
+
+      f = open(tmpname, 'r')
+      try:
+        self.assertEqual(md5.new(f.read(8192)).hexdigest(),
+                         '4e612764808bd46337eb0f575415fc30')
+      finally:
+        f.close()
+    finally:
+      os.unlink(tmpname)
+
+
+class TestShellQuoting(unittest.TestCase):
+  """Test case for shell quoting functions"""
+
+  def testShellQuote(self):
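+    # Strings of "safe" characters pass through unchanged; everything
+    # else is wrapped in single quotes, with embedded single quotes
+    # escaped as '\''.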
+    self.assertEqual(ShellQuote('abc'), "abc")
+    self.assertEqual(ShellQuote('ab"c'), "'ab\"c'")
+    self.assertEqual(ShellQuote("a'bc"), "'a'\\''bc'")
+    self.assertEqual(ShellQuote("a b c"), "'a b c'")
+    self.assertEqual(ShellQuote("a b\\ c"), "'a b\\ c'")
+
+  def testShellQuoteArgs(self):
+    self.assertEqual(ShellQuoteArgs(['a', 'b', 'c']), "a b c")
+    self.assertEqual(ShellQuoteArgs(['a', 'b"', 'c']), "a 'b\"' c")
+    self.assertEqual(ShellQuoteArgs(['a', 'b\'', 'c']), "a 'b'\\\''' c")
+
+
+if __name__ == '__main__':
+  unittest.main()
diff --git a/testing/qa-sample.yaml b/testing/qa-sample.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..ace46ee41642b5d43c730ab714d4380bc86e0c5d
--- /dev/null
+++ b/testing/qa-sample.yaml
@@ -0,0 +1,47 @@
+# Cluster name
+name: xen-test
+
+# System to use
+os: debian-edgy
+os-size: 10G
+swap-size: 1G
+mem: 512M
+
+# Nodes to use
+nodes:
+# Master node
+- primary: xen-test-0
+  secondary: 192.168.1.1
+
+# Other nodes
+- primary: xen-test-1
+  secondary: 192.168.1.2
+
+# Instance names to use
+instances:
+- name: xen-test-inst1
+- name: xen-test-inst2
+
+# Tests to run
+tests:
+  env: True
+
+  cluster-verify: True
+  cluster-burnin: True
+  cluster-master-failover: True
+  cluster-destroy: True
+
+  instance-add-plain-disk: True
+  instance-add-local-mirror-disk: True
+  instance-add-remote-raid-disk: True
+  instance-failover: True
+
+  # This test takes up to 6 minutes to complete
+  instance-automatic-restart: False
+
+  # This test takes at least 35 minutes to complete
+  instance-consecutive-failures: False
+
+# Other settings
+options:
+  burnin-instances: 2
diff --git a/tools/Makefile.am b/tools/Makefile.am
new file mode 100644
index 0000000000000000000000000000000000000000..12995fa7f645a7a1d3803eaf58f0990f0fd78e88
--- /dev/null
+++ b/tools/Makefile.am
@@ -0,0 +1 @@
+dist_pkgdata_SCRIPTS = lvmstrap burnin cfgshell
diff --git a/tools/burnin b/tools/burnin
new file mode 100755
index 0000000000000000000000000000000000000000..aa3b25c5133724bdb89390917a423f82d3f00c52
--- /dev/null
+++ b/tools/burnin
@@ -0,0 +1,162 @@
+#!/usr/bin/python
+#
+
+import sys
+import optparse
+
+from ganeti import opcodes
+from ganeti import mcpu
+from ganeti import objects
+from ganeti import constants
+from ganeti import cli
+from ganeti import logger
+
+USAGE = ("\tburnin [options] instance_name ...")
+
+def Usage():
+  """Shows program usage information and exits the program."""
+
+  print >> sys.stderr, "Usage:"
+  print >> sys.stderr, USAGE
+  sys.exit(2)
+
+
+def Feedback(msg):
+  print msg
+
+
+def ParseOptions():
+  """Parses the command line options.
+
+  In case of command line errors, it will show the usage and exit the
+  program.
+
+  Returns:
+    (options, args), as returned by OptionParser.parse_args
+  """
+
+  parser = optparse.OptionParser(usage="\n%s" % USAGE,
+                                 version="%%prog (ganeti) %s" %
+                                 constants.RELEASE_VERSION,
+                                 option_class=cli.CliOption)
+
+  parser.add_option("-o", "--os", dest="os", default=None,
+                    help="OS to use during burnin",
+                    metavar="<OS>")
+  parser.add_option("--os-size", dest="os_size", help="Disk size",
+                    default=4 * 1024, type="unit", metavar="<size>")
+  parser.add_option("--swap-size", dest="swap_size", help="Swap size",
+                    default=4 * 1024, type="unit", metavar="<size>")
+  parser.add_option("-v", "--verbose",
+                    action="store_true", dest="verbose", default=False,
+                    help="print command execution messages to stdout")
+
+  options, args = parser.parse_args()
+  if len(args) < 1:
+    Usage()
+
+  return options, args
+
+
+def BurninCluster(opts, args):
+  """Test a cluster intensively.
+
+  This will create instances and then start/stop/failover them.
+  It is safe for existing instances but could impact performance.
+
+  """
+
+  logger.SetupLogging(debug=True, program="ganeti/burnin")
+  proc = mcpu.Processor()
+  result = proc.ExecOpCode(opcodes.OpQueryNodes(output_fields=["name"]),
+                           Feedback)
+  nodelist = [data[0] for data in result]
+
+  Feedback("- Testing global parameters")
+
+  result = proc.ExecOpCode(opcodes.OpDiagnoseOS(), Feedback)
+
+  if not result:
+    Feedback("Can't get the OS list")
+    return 1
+
+  # filter non-valid OS-es
+  oses = {}
+  for node_name in result:
+    oses[node_name] = [obj for obj in result[node_name]
+                       if isinstance(obj, objects.OS)]
+
+  fnode = oses.keys()[0]
+  os_set = set([os_inst.name for os_inst in oses[fnode]])
+  del oses[fnode]
+  for node in oses:
+    os_set &= set([os_inst.name for os_inst in oses[node]])
+
+  if opts.os not in os_set:
+    Feedback("OS not found")
+    return 1
+
+  to_remove = []
+  try:
+    idx = 0
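+    # Distribute the instances round-robin over the node list: each
+    # instance gets the current node as primary and the next one as
+    # secondary (with wrap-around).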
+    for instance_name in args:
+      next_idx = idx + 1
+      if next_idx >= len(nodelist):
+        next_idx = 0
+      pnode = nodelist[idx]
+      snode = nodelist[next_idx]
+      if len(nodelist) > 1:
+        tplate = constants.DT_REMOTE_RAID1
+      else:
+        tplate = constants.DT_PLAIN
+
+      op = opcodes.OpCreateInstance(instance_name=instance_name, mem_size=128,
+                                    disk_size=opts.os_size,
+                                    swap_size=opts.swap_size,
+                                    disk_template=tplate,
+                                    mode=constants.INSTANCE_CREATE,
+                                    os_type=opts.os, pnode=pnode,
+                                    snode=snode, vcpus=1,
+                                    start=True,
+                                    wait_for_sync=True)
+      Feedback("- Add instance %s on node %s" % (instance_name, pnode))
+      result = proc.ExecOpCode(op, Feedback)
+      to_remove.append(instance_name)
+      idx = next_idx
+
+
+    if len(nodelist) > 1:
+      # failover
+      for instance_name in args:
+        op = opcodes.OpFailoverInstance(instance_name=instance_name,
+                                        ignore_consistency=True)
+
+        Feedback("- Failover instance %s" % (instance_name))
+        result = proc.ExecOpCode(op, Feedback)
+
+    # stop / start
+    for instance_name in args:
+      op = opcodes.OpShutdownInstance(instance_name=instance_name)
+      Feedback("- Shutdown instance %s" % instance_name)
+      result = proc.ExecOpCode(op, Feedback)
+      op = opcodes.OpStartupInstance(instance_name=instance_name, force=False)
+      Feedback("- Start instance %s" % instance_name)
+      result = proc.ExecOpCode(op, Feedback)
+
+  finally:
+    # remove
+    for instance_name in to_remove:
+      op = opcodes.OpRemoveInstance(instance_name=instance_name)
+      Feedback("- Remove instance %s" % instance_name)
+      result = proc.ExecOpCode(op, Feedback)
+
+  return 0
+
+def main():
+  """Main function"""
+
+  opts, args = ParseOptions()
+  return BurninCluster(opts, args)
+
+
+if __name__ == "__main__":
+  main()
diff --git a/tools/cfgshell b/tools/cfgshell
new file mode 100755
index 0000000000000000000000000000000000000000..253d32a3a9baed1fff33301013de66b575ba15e2
--- /dev/null
+++ b/tools/cfgshell
@@ -0,0 +1,357 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Tool to do manual changes to the config file.
+
+"""
+
+
+import os
+import sys
+import optparse
+import time
+import cmd
+
+try:
+  import readline
+  _wd = readline.get_completer_delims()
+  _wd = _wd.replace("-", "")
+  readline.set_completer_delims(_wd)
+  del _wd
+except ImportError:
+  pass
+
+from ganeti import errors
+from ganeti import config
+from ganeti import objects
+
+
+class ConfigShell(cmd.Cmd):
+  """Command tool for editing the config file.
+
+  Note that although we don't do saves after remove, the current
+  ConfigWriter code does that; so we can't prevent someone from
+  actually breaking the config with this tool. It's the users'
+  responsibility to know what they're doing.
+
+  """
+  prompt = "(/) "
+
+  def __init__(self, cfg_file=None):
+    """Constructor for the ConfigShell object.
+
+    The optional cfg_file argument will be used to load a config file
+    at startup.
+
+    """
+    cmd.Cmd.__init__(self)
+    self.cfg = self.cluster_name = None
+    self.parents = []
+    self.path = []
+    if cfg_file:
+      self.do_load(cfg_file)
+      self.postcmd(False, "")
+
+  def emptyline(self):
+    """Empty line handling.
+
+    Note that the default will re-run the last command. We don't want
+    that, and just ignore the empty line.
+
+    """
+    return False
+
+  @staticmethod
+  def _get_entries(obj):
+    """Computes the list of subdirs and files in the given object.
+
+    Depending on the passed object, this looks at each logical child
+    of the object and decides whether it's a container or a simple
+    value. Based on this, it computes the lists of subdirs and files.
+
+    """
+    dirs = []
+    entries = []
+    if isinstance(obj, objects.ConfigObject):
+      for name in obj.__slots__:
+        child = getattr(obj, name, None)
+        if isinstance(child, (list, dict, tuple, objects.ConfigObject)):
+          dirs.append(name)
+        else:
+          entries.append(name)
+    elif isinstance(obj, (list, tuple)):
+      for idx, child in enumerate(obj):
+        if isinstance(child, (list, dict, tuple, objects.ConfigObject)):
+          dirs.append(str(idx))
+        else:
+          entries.append(str(idx))
+    elif isinstance(obj, dict):
+      dirs = obj.keys()
+
+    return dirs, entries
+
+  def precmd(self, line):
+    """Precmd hook to prevent commands in invalid states.
+
+    This will prevent everything except load and quit when no
+    configuration is loaded.
+
+    """
+    if line.startswith("load") or line == 'EOF' or line == "quit":
+      return line
+    if not self.parents or self.cfg is None:
+      print "No config data loaded"
+      return ""
+    return line
+
+  def postcmd(self, stop, line):
+    """Postcmd hook to update the prompt.
+
+    We show the current location in the prompt and this function is
+    used to update it; this is only needed after cd and load, but we
+    update it anyway.
+
+    """
+    if self.cfg is None:
+      self.prompt = "(#no config) "
+    else:
+      self.prompt = "(%s:/%s) " % (self.cluster_name, "/".join(self.path))
+    return stop
+
+  def do_load(self, line):
+    """Load function.
+
+    Syntax: load [/path/to/config/file]
+
+    This will load a new configuration, discarding any existing data
+    (if any). If no argument has been passed, it will use the default
+    config file location.
+
+    """
+    if line:
+      arg = line
+    else:
+      arg = None
+    try:
+      self.cfg = config.ConfigWriter(cfg_file=arg, offline=True)
+      self.cfg._OpenConfig()
+      self.parents = [self.cfg._config_data]
+      self.path = []
+      self.cluster_name = self.cfg.GetClusterName()
+    except errors.ConfigurationError, err:
+      print "Error: %s" % str(err)
+    return False
+
+  def do_ls(self, line):
+    """List the current entry.
+
+    This will show directories with a slash appended and files
+    normally.
+
+    """
+    dirs, entries = self._get_entries(self.parents[-1])
+    for i in dirs:
+      print i + "/"
+    for i in entries:
+      print i
+    return False
+
+  def complete_cd(self, text, line, begidx, endidx):
+    """Completion function for the cd command.
+
+    """
+    pointer = self.parents[-1]
+    dirs, entries = self._get_entries(pointer)
+    matches = [str(name) for name in dirs if name.startswith(text)]
+    return matches
+
+  def do_cd(self, line):
+    """Changes the current path.
+
+    Valid arguments: either .. or a child of the current object.
+
+    """
+    if line == "..":
+      if self.path:
+        self.path.pop()
+        self.parents.pop()
+        return False
+      else:
+        print "Already at top level"
+        return False
+
+    pointer = self.parents[-1]
+    dirs, entries = self._get_entries(pointer)
+
+    if line not in dirs:
+      print "No such child"
+      return False
+    if isinstance(pointer, (dict, list, tuple)):
+      if isinstance(pointer, (list, tuple)):
+        line = int(line)
+      new_obj = pointer[line]
+    else:
+      new_obj = getattr(pointer, line)
+    self.parents.append(new_obj)
+    self.path.append(str(line))
+    return False
+
+  def do_pwd(self, line):
+    """Shows the current path.
+
+    This duplicates the prompt functionality, but it's reasonable to
+    have.
+
+    """
+    print "/" + "/".join(self.path)
+    return False
+
+  def complete_cat(self, text, line, begidx, endidx):
+    """Completion for the cat command.
+
+    """
+    pointer = self.parents[-1]
+    dirs, entries = self._get_entries(pointer)
+    matches = [name for name in entries if name.startswith(text)]
+    return matches
+
+  def do_cat(self, line):
+    """Shows the contents of the given file.
+
+    This will display the contents of the given file, which must be a
+    child of the current path (as shown by `ls`).
+
+    """
+    pointer = self.parents[-1]
+    dirs, entries = self._get_entries(pointer)
+    if line not in entries:
+      print "No such entry"
+      return False
+
+    if isinstance(pointer, (dict, list, tuple)):
+      if isinstance(pointer, (list, tuple)):
+        line = int(line)
+      val = pointer[line]
+    else:
+      val = getattr(pointer, line)
+    print val
+    return False
+
+  def do_verify(self, line):
+    """Verify the configuration.
+
+    This verifies the contents of the configuration file (and not the
+    in-memory data, as every modify operation automatically saves the
+    file).
+
+    """
+    vdata = self.cfg.VerifyConfig()
+    if vdata:
+      print "Validation failed. Errors:"
+      for text in vdata:
+        print text
+    return False
+
+  def do_save(self, line):
+    """Saves the configuration data.
+
+    Note that this is redundant (all modify operations automatically
+    save the data), but it is good to have, as this behaviour could
+    change in the future.
+
+    """
+    if self.cfg.VerifyConfig():
+      print "Config data does not validate, refusing to save."
+      return False
+    self.cfg._WriteConfig()
+
+  def do_rm(self, line):
+    """Removes an instance or a node.
+
+    This function works only on instances or nodes. You must be in
+    either `/nodes` or `/instances` and give a valid argument.
+
+    """
+    pointer = self.parents[-1]
+    data = self.cfg._config_data
+    if pointer not in (data.instances, data.nodes):
+      print "Can only delete instances and nodes"
+      return False
+    if pointer == data.instances:
+      if line in data.instances:
+        self.cfg.RemoveInstance(line)
+      else:
+        print "Invalid instance name"
+    else:
+      if line in data.nodes:
+        self.cfg.RemoveNode(line)
+      else:
+        print "Invalid node name"
+
+  def do_EOF(self, line):
+    print
+    return True
+
+  def do_quit(self, line):
+    """Exit the application.
+
+    """
+    print
+    return True
+
+class Error(Exception):
+  """Generic exception"""
+  pass
+
+
+def ParseOptions():
+  """Parses the command line options.
+
+  In case of command line errors, it will show the usage and exit the
+  program.
+
+  Returns:
+    (options, args), as returned by OptionParser.parse_args
+  """
+
+  parser = optparse.OptionParser()
+
+  options, args = parser.parse_args()
+
+  return options, args
+
+
+def main():
+  """Application entry point.
+
+  This is just a wrapper over BootStrap, to handle our own exceptions.
+  """
+  options, args = ParseOptions()
+  if args:
+    cfg_file = args[0]
+  else:
+    cfg_file = None
+  shell = ConfigShell(cfg_file=cfg_file)
+  shell.cmdloop()
+
+
+if __name__ == "__main__":
+  main()
diff --git a/tools/lvmstrap b/tools/lvmstrap
new file mode 100755
index 0000000000000000000000000000000000000000..199f1ee1d669aff85f74940f6c3eab11602a40a3
--- /dev/null
+++ b/tools/lvmstrap
@@ -0,0 +1,770 @@
+#!/usr/bin/python
+#
+
+# Copyright (C) 2006, 2007 Google Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+
+"""Program which configures LVM on the Ganeti nodes.
+
+This program wipes disks and creates a volume group on top of them. It
+can also show disk information to help you decide which disks you want
+to wipe.
+
+The error handling is done by raising our own exceptions from most of
+the functions; these exceptions are then handled globally in the main()
+function. The exceptions that each function can raise are not
+documented individually, since almost every error path ends in a
+raise.
+
+Another two exceptions that are handled globally are IOError and
+OSError. The idea behind this is that, since we run as root, we should
+usually not get these errors, but if we do, it's most probably a system
+error, so they should be handled and the user instructed to report
+them.
+"""
+
+
+import os
+import sys
+import optparse
+import time
+
+from ganeti.utils import RunCmd
+from ganeti import constants
+
+USAGE = ("\tlvmstrap.py diskinfo\n"
+         "\tlvmstrap.py [--vgname=NAME] { --alldisks | --disks DISKLIST }"
+         " create")
+
+verbose_flag = False
+
+
+class Error(Exception):
+  """Generic exception"""
+  pass
+
+
+class ProgrammingError(Error):
+  """Exception denoting invalid assumptions in programming.
+
+  This should catch sysfs tree changes, or otherwise incorrect
+  assumptions about the contents of the /sys/block/... directories.
+  """
+  pass
+
+
+class SysconfigError(Error):
+  """Exception denoting invalid system configuration.
+
+  If the system configuration is somehow wrong (e.g. /dev files
+  missing, or having mismatched major/minor numbers relative to
+  /sys/block devices), this exception will be raised.
+
+  This should usually mean that the installation of the Xen node
+  failed in some steps.
+  """
+  pass
+
+
+class PrereqError(Error):
+  """Exception denoting invalid prerequisites.
+
+  If the node does not meet the requirements for cluster membership, this
+  exception will be raised. Things like wrong kernel version, or no
+  free disks, etc. belong here.
+
+  This should usually mean that the build steps for the Xen node were
+  not followed correctly.
+  """
+  pass
+
+
+class OperationalError(Error):
+  """Exception denoting actual errors.
+
+  Errors during the bootstrapping are signaled using this exception.
+  """
+  pass
+
+
+class ParameterError(Error):
+  """Exception denoting invalid input from user.
+
+  Wrong disks given as parameters will be signaled using this
+  exception.
+  """
+  pass
+
+def Usage():
+  """Shows program usage information and exits the program."""
+
+  print >> sys.stderr, "Usage:"
+  print >> sys.stderr, USAGE
+  sys.exit(2)
+
+
+def ParseOptions():
+  """Parses the command line options.
+
+  In case of command line errors, it will show the usage and exit the
+  program.
+
+  Returns:
+    (options, args), as returned by OptionParser.parse_args
+  """
+  global verbose_flag
+
+  parser = optparse.OptionParser(usage="\n%s" % USAGE,
+                                 version="%%prog (ganeti) %s" %
+                                 constants.RELEASE_VERSION)
+
+  parser.add_option("--alldisks", dest="alldisks",
+                    help="erase ALL disks", action="store_true",
+                    default=False)
+  parser.add_option("-d", "--disks", dest="disks",
+                    help="Choose disks (e.g. hda,hdg)",
+                    metavar="DISKLIST")
+  parser.add_option("-v", "--verbose",
+                    action="store_true", dest="verbose", default=False,
+                    help="print command execution messages to stdout")
+  parser.add_option("-g", "--vg-name", type="string",
+                    dest="vgname", default="xenvg", metavar="NAME",
+                    help="the volume group to be created [default: xenvg]")
+
+
+  options, args = parser.parse_args()
+  if len(args) != 1:
+    Usage()
+
+  verbose_flag = options.verbose
+
+  return options, args
+
+
+def ExecCommand(command):
+  """Executes a command.
+
+  This is just a wrapper around utils.RunCmd, with the
+  difference that if the command line argument -v has been given, it
+  will print the command line and the command output on stdout.
+
+  Args:
+    the command line
+  Returns:
+    (status, output) where status is the exit status and output the
+      stdout and stderr of the command together
+  """
+
+  if verbose_flag:
+    print command
+  result = RunCmd(command)
+  if verbose_flag:
+    print result.output
+  return result
+
+
+def CheckPrereq():
+  """Check the prerequisites of this program.
+
+  It checks that we run on Linux 2.6, that /sys and /proc are mounted,
+  and that /sys/block is a directory.
+  """
+
+  if os.getuid() != 0:
+    raise PrereqError("This tool runs as root only. Really.")
+
+  osname, nodename, release, version, arch = os.uname()
+  if osname != 'Linux':
+    raise PrereqError("This tool only runs on Linux "
+                      "(detected OS: %s)." % osname)
+
+  if not release.startswith("2.6."):
+    raise PrereqError("Wrong major kernel version (detected %s, needs "
+                      "2.6.*)" % release)
+
+  if not os.path.ismount("/sys"):
+    raise PrereqError("Can't find a filesystem mounted at /sys. "
+                      "Please mount /sys.")
+
+  if not os.path.isdir("/sys/block"):
+    raise SysconfigError("Can't find /sys/block directory. Has the "
+                         "layout of /sys changed?")
+
+  if not os.path.ismount("/proc"):
+    raise PrereqError("Can't find a filesystem mounted at /proc. "
+                      "Please mount /proc.")
+
+  if not os.path.exists("/proc/mounts"):
+    raise SysconfigError("Can't find /proc/mounts")
+
+
+def CheckVGExists(vgname):
+  """Checks to see if a volume group exists.
+
+  Args:
+    vgname: the volume group name
+
+  Returns:
+    a four-tuple (exists, lv_count, vg_size, vg_free), where:
+      exists: True if the volume exists, otherwise False; if False,
+        all other members of the tuple are None
+      lv_count: The number of logical volumes in the volume group
+      vg_size: The total size of the volume group (in gibibytes)
+      vg_free: The available space in the volume group
+  """
+
+  result = ExecCommand("vgs --nohead -o lv_count,vg_size,"
+                       "vg_free --nosuffix --units g "
+                       "--ignorelockingfailure %s" % vgname)
+  if not result.failed:
+    try:
+      lv_count, vg_size, vg_free = result.stdout.strip().split()
+    except ValueError:
+      # This means the output of vgs can't be parsed
+      raise PrereqError("cannot parse output of vgs (%s)" % result.stdout)
+  else:
+    lv_count = vg_size = vg_free = None
+
+  return not result.failed, lv_count, vg_size, vg_free
+
+
+def CheckSysDev(name, devnum):
+  """Checks consistency between /sys and /dev trees.
+
+  In /sys/block/<name>/dev and /sys/block/<name>/<part>/dev are the
+  kernel-known device numbers. The /dev/<name> block/char devices are
+  created by userspace and thus could differ from the kernel
+  view. This function checks the consistency between the device number
+  read from /sys and the actual device number in /dev.
+
+  Note that since the system could be using udev which removes and
+  recreates the device nodes on partition table rescan, we need to do
+  some retries here. Since we only do a stat, we can afford to do many
+  short retries.
+
+  Args:
+   name: the device name, e.g. 'sda'
+   devnum: the device number, e.g. 0x803 (2051 in decimal) for sda3
+
+  Returns:
+    None; failure of the check is signalled by raising a
+      SysconfigError exception
+  """
+
+  path = "/dev/%s" % name
+  for retries in range(40):
+    if os.path.exists(path):
+      break
+    time.sleep(0.250)
+  else:
+    raise SysconfigError("the device file %s does not exist, but the block "
+                         "device exists in the /sys/block tree" % path)
+  rdev = os.stat(path).st_rdev
+  if devnum != rdev:
+    raise SysconfigError("For device %s, the major:minor in /dev is %04x "
+                         "while the major:minor in sysfs is %s" %
+                         (path, rdev, devnum))
+
+
+def ReadDev(syspath):
+  """Reads the device number from a sysfs path.
+
+  The device number is given in sysfs under a block device directory
+  in a file named 'dev' which contains major:minor (in ASCII). This
+  function reads that file and converts the major:minor pair to a dev
+  number.
+
+  Args:
+    syspath: the path to a block device dir in sysfs, e.g. /sys/block/sda
+
+  Returns:
+    the device number
+  """
+
+  if not os.path.exists("%s/dev" % syspath):
+    raise ProgrammingError("Invalid path passed to ReadDev: %s" % syspath)
+  f = open("%s/dev" % syspath)
+  data = f.read().strip()
+  f.close()
+  major, minor = data.split(":", 1)
+  major = int(major)
+  minor = int(minor)
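+  # e.g. the contents "8:3" (sda3) yield os.makedev(8, 3) == 0x803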
+  dev = os.makedev(major, minor)
+  return dev
+
+
+def ReadSize(syspath):
+  """Reads the size from a sysfs path.
+
+  The size is given in sysfs under a block device directory in a file
+  named 'size' which contains the number of sectors (in ASCII). This
+  function reads that file and converts the number in sectors to the
+  size in bytes.
+
+  Args:
+    syspath: the path to a block device dir in sysfs, e.g. /sys/block/sda
+
+  Returns:
+    the device size in bytes
+  """
+
+  if not os.path.exists("%s/size" % syspath):
+    raise ProgrammingError("Invalid path passed to ReadSize: %s" % syspath)
+  f = open("%s/size" % syspath)
+  data = f.read().strip()
+  f.close()
+  size = 512L * int(data)
+  return size
+
+
+def ReadPV(name):
+  """Reads physical volume information.
+
+  This function tries to see if a block device is a physical volume.
+
+  Args:
+    dev: the device name (e.g. sda)
+  Returns:
+    The name of the volume group to which this PV belongs, or
+    "" if this PV is not in use, or
+    None if this is not a PV
+  """
+
+  result = ExecCommand("pvdisplay -c /dev/%s" % name)
+  if result.failed:
+    return None
+  vgname = result.stdout.strip().split(":")[1]
+  return vgname
+
+
+def GetDiskList():
+  """Computes the block device list for this system.
+
+  This function examines the /sys/block tree and using information
+  therein, computes the status of the block device.
+
+  Returns:
+    [(name, size, dev, partitions, inuse), ...]
+  where:
+    name is the block device name (e.g. sda)
+    size the size in bytes
+    dev  the device number (e.g. 8704 for hdg)
+    partitions is [(name, size, dev), ...] mirroring the disk list data
+    inuse is a boolean showing the in-use status of the disk, computed as the
+      possibility of re-reading the partition table (the meaning of the
+      operation varies with the kernel version, but is usually accurate;
+      a mounted disk/partition or swap-area or PV with active LVs on it
+      is busy)
+  """
+
+  dlist = []
+  for name in os.listdir("/sys/block"):
+    if (not name.startswith("hd") and
+        not name.startswith("sd") and
+	not name.startswith("ubd")):
+      continue
+
+    size = ReadSize("/sys/block/%s" % name)
+
+    f = open("/sys/block/%s/removable" % name)
+    removable = int(f.read().strip())
+    f.close()
+
+    if removable:
+      continue
+
+    dev = ReadDev("/sys/block/%s" % name)
+    CheckSysDev(name, dev)
+    inuse = not CheckReread(name)
+    # Enumerate partitions of the block device
+    partitions = []
+    for partname in os.listdir("/sys/block/%s" % name):
+      if not partname.startswith(name):
+        continue
+      partdev = ReadDev("/sys/block/%s/%s" % (name, partname))
+      partsize = ReadSize("/sys/block/%s/%s" % (name, partname))
+      CheckSysDev(partname, partdev)
+      partitions.append((partname, partsize, partdev))
+    partitions.sort()
+    dlist.append((name, size, dev, partitions, inuse))
+  dlist.sort()
+  return dlist
+
+
+def GetMountInfo():
+  """Reads /proc/mounts and computes the mountpoint-devnum mapping.
+
+  This function reads /proc/mounts, finds the mounted filesystems
+  (excepting a hard-coded blacklist of network and virtual
+  filesystems) and does a stat on these mountpoints. The st_dev number
+  of the results is memorised for later matching against the
+  /sys/block devices.
+
+  Returns:
+    a device number: mountpoint dictionary
+  """
+
+  f = open("/proc/mounts", "r")
+  mountlines = f.readlines()
+  f.close()
+  mounts = {}
+  for line in mountlines:
+    device, mountpoint, fstype, rest = line.split(None, 3)
+    # fs type blacklist
+    if fstype in ["nfs", "nfs4", "autofs", "tmpfs", "proc", "sysfs"]:
+      continue
+    try:
+      dev = os.stat(mountpoint).st_dev
+    except OSError, err:
+      # this should be a fairly rare error, since we are blacklisting
+      # network filesystems; with this in mind, we'll ignore it,
+      # since the rereadpt check catches in-use filesystems,
+      # and this is used for disk information only
+      print >> sys.stderr, ("Can't stat mountpoint '%s': %s" %
+                            (mountpoint, err))
+      print >> sys.stderr, "Ignoring."
+      continue
+    mounts[dev] = mountpoint
+  return mounts
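+
+# A sketch of the mapping returned by GetMountInfo (device numbers and
+# mountpoints are hypothetical):
+#   {os.makedev(8, 1): "/", os.makedev(8, 3): "/var"}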
+
+
+def DevInfo(name, dev, mountinfo):
+  """Computes miscellaneous informations about a block device.
+
+  Args:
+    name: the device name, e.g. sda
+
+  Returns:
+    (mpath, whatvg, fileinfo), where
+    mpath is the mount path where this device is mounted or None
+    whatvg is the result of the ReadPV function
+    fileinfo is the first 45 chars of the output of "file -bs" on the device
+  """
+
+  if dev in mountinfo:
+    mpath = mountinfo[dev]
+  else:
+    mpath = None
+
+  whatvg = ReadPV(name)
+
+  result = ExecCommand("file -bs /dev/%s" % name)
+  if result.failed:
+    fileinfo = "<error: %s>" % result.stderr
+  fileinfo = result.stdout[:45]
+  return mpath, whatvg, fileinfo
+
+
+def ShowDiskInfo():
+  """Shows a nicely formatted block device list for this system.
+
+  This function shows the user a table with the information gathered
+  by the other functions defined, in order to help the user make a
+  choice about which disks should be allocated to our volume group.
+
+  """
+  mounts = GetMountInfo()
+  dlist = GetDiskList()
+
+  print "------- Disk information -------"
+  print ("%5s %7s %4s %5s %-10s %s" %
+         ("Name", "Size[M]", "Used", "Mount", "LVM?", "Info"))
+
+  flatlist = []
+  # Flatten the [(disk, [partition,...]), ...] list
+  for name, size, dev, parts, inuse in dlist:
+    if inuse:
+      str_inuse = "yes"
+    else:
+      str_inuse = "no"
+    flatlist.append((name, size, dev, str_inuse))
+    for partname, partsize, partdev in parts:
+      flatlist.append((partname, partsize, partdev, ""))
+
+  for name, size, dev, in_use in flatlist:
+    mp, vgname, fileinfo = DevInfo(name, dev, mounts)
+    if mp is None:
+      mp = "-"
+    if vgname is None:
+      lvminfo = "-"
+    elif vgname == "":
+      lvminfo = "yes,free"
+    else:
+      lvminfo = "in %s" % vgname
+
+    if not in_use:
+      # Indent partitions: whole disks always carry a "yes"/"no" in-use
+      # flag, while partitions have an empty string here
+      name = " %s" % name
+    print ("%-5s %7.2f %-4s %-5s %-10s %s" %
+           (name, float(size) / 1024 / 1024, in_use, mp, lvminfo, fileinfo))
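+
+# The table printed by ShowDiskInfo looks roughly like this (all values
+# hypothetical):
+#    Name Size[M] Used Mount LVM?       Info
+#   sda   8192.00 no   -     -          x86 boot sector
+#    sda1 8190.00      -     yes,free   LVM2 (Linux Logical Volume...)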
+
+
+def CheckReread(name):
+  """Check to see if a block device is in use.
+
+  Uses blockdev to reread the partition table of a block device, and
+  thus compute the in-use status. See the discussion in GetDiskList
+  about the meaning of 'in use'.
+
+  Returns:
+    boolean, the in-use status of the device
+  """
+
+  for retries in range(3):
+    result = ExecCommand("blockdev --rereadpt /dev/%s" % name)
+    if not result.failed:
+      break
+    time.sleep(2)
+
+  return not result.failed
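+
+# Note: CheckReread relies on "blockdev --rereadpt" failing (typically
+# with EBUSY) while the kernel considers the device in use; the command
+# is retried up to three times, two seconds apart, before giving up.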
+
+
+def WipeDisk(name):
+  """Wipes a block device.
+
+  This function wipes a block device, by clearing and re-reading the
+  partition table. If not successful, it writes back the old partition
+  data, and leaves the cleanup to the user.
+
+  Args:
+    name: the device name (e.g. sda)
+  """
+
+  if not CheckReread(name):
+    raise OperationalError("CRITICAL: disk %s you selected seems to be in "
+                           "use. ABORTING!" % name)
+
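+  # Keep a copy of the first sector: the MSDOS partition table lives in
+  # the first 512 bytes, so we can write it back if the wipe turns out
+  # to be unsafe.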
+  fd = os.open("/dev/%s" % name, os.O_RDWR | os.O_SYNC)
+  olddata = os.read(fd, 512)
+  if len(olddata) != 512:
+    raise OperationalError("CRITICAL: Can't read partition table information "
+                           "from /dev/%s (needed 512 bytes, got %d" %
+                           (name, len(olddata)))
+  newdata = "\0" * 512
+  os.lseek(fd, 0, 0)
+  bytes_written = os.write(fd, newdata)
+  os.close(fd)
+  if bytes_written != 512:
+    raise OperationalError("CRITICAL: Can't write partition table information"
+                           " to /dev/%s (tried to write 512 bytes, written "
+                           "%d. I don't know how to cleanup. Sorry." %
+                           (name, bytes_written))
+
+  if not CheckReread(name):
+    fd = os.open("/dev/%s" % name, os.O_RDWR | os.O_SYNC)
+    os.write(fd, olddata)
+    os.close(fd)
+    raise OperationalError("CRITICAL: disk %s which I have just wiped cannot "
+                           "reread partition table. Most likely, it is "
+                           "in use. You have to clean after this yourself. "
+                           "I tried to restore the old partition table, "
+                           "but I cannot guarantee nothing has broken." %
+                           name)
+
+
+def PartitionDisk(name):
+  """Partitions a disk.
+
+  This function creates a single partition spanning the entire disk,
+  by means of fdisk.
+
+  Args:
+    name: the device name, e.g. sda
+  """
+  result = ExecCommand("echo ,,8e, | sfdisk /dev/%s" % name)
+  if result.failed:
+    raise OperationalError("CRITICAL: disk %s which I have just partitioned "
+                           "cannot reread its partition table, or there "
+                           "is some other sfdisk error. Likely, it is in "
+                           "use. You have to clean this yourself. Error "
+                           "message from sfdisk: %s" %
+                           (name, result.output))
+
+
+def CreatePVOnDisk(name):
+  """Creates a physical volume on a block device.
+
+  This function creates a physical volume on a block device, overriding
+  all warnings. So it can wipe existing PVs and PVs which are in a VG.
+
+  Args:
+    name: the device name, e.g. sda
+
+  """
+  result = ExecCommand("pvcreate -yff /dev/%s1 " % name)
+  if result.failed:
+    raise OperationalError("I cannot create a physical volume on "
+                           "partition /dev/%s1. Error message: %s. "
+                           "Please clean up yourself." %
+                           (name, result.output))
+
+
+def CreateVG(vgname, disks):
+  """Creates the volume group.
+
+  This function creates a volume group named `vgname` on the disks
+  given as parameters. The physical extent size is set to 64MB.
+
+  Args:
+    vgname: the name of the volume group to create
+    disks: a list of disk names, e.g. ['sda','sdb']
+
+  """
+  pnames = ["'/dev/%s1'" % disk for disk in disks]
+  result = ExecCommand("vgcreate -s 64MB '%s' %s" % (vgname, " ".join(pnames)))
+  if result.failed:
+    raise OperationalError("I cannot create the volume group %s from "
+                           "disks %s. Error message: %s. Please clean up "
+                           "yourself." %
+                           (vgname, " ".join(disks), result.output))
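+
+# For disks ['sda', 'sdb'] and vgname "xenvg" (hypothetical values), the
+# command assembled above would be:
+#   vgcreate -s 64MB 'xenvg' '/dev/sda1' '/dev/sdb1'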
+
+
+def ValidateDiskList(options):
+  """Validates or computes the disk list for create.
+
+  This function either computes the available disk list (if the user
+  gave --alldisks option), or validates the user-given disk list (by
+  using the --disks option) such that all given disks are present and
+  not in use.
+
+  Args:
+    options: the options returned from ParseOptions
+
+  Returns:
+    a list of disk names, e.g. ['sda', 'sdb']
+  """
+
+  sysdisks = GetDiskList()
+  if not sysdisks:
+    raise PrereqError("no disks found (I looked for "
+                      "non-removable block devices).")
+  sysd_free = []
+  sysd_used = []
+  for name, size, dev, part, used in sysdisks:
+    if used:
+      sysd_used.append(name)
+    else:
+      sysd_free.append(name)
+
+  if not sysd_free:
+    raise PrereqError("no free disks found! (%d in-use disks)" %
+                      len(sysd_used))
+  if options.alldisks:
+    disklist = sysd_free
+  elif options.disks:
+    disklist = options.disks.split(",")
+    for name in disklist:
+      if name in sysd_used:
+        raise ParameterError("disk %s is in use, cannot wipe!" % name)
+      if name not in sysd_free:
+        raise ParameterError("cannot find disk %s!" % name)
+  else:
+    raise ParameterError("Please use either --alldisks or --disks!")
+
+  return disklist
+
+
+def BootStrap():
+  """Actual main routine."""
+
+  CheckPrereq()
+
+  options, args = ParseOptions()
+  vgname = options.vgname
+  if not args:
+    Usage()
+  command = args.pop(0)
+  if command == "diskinfo":
+    ShowDiskInfo()
+    return
+  if command != "create":
+    Usage()
+
+  exists, lv_count, vg_size, vg_free = CheckVGExists(vgname)
+  if exists:
+    raise PrereqError("It seems volume group '%s' already exists:\n"
+                      "  LV count: %s, size: %s, free: %s." %
+                      (vgname, lv_count, vg_size, vg_free))
+
+  disklist = ValidateDiskList(options)
+
+  for disk in disklist:
+    WipeDisk(disk)
+    PartitionDisk(disk)
+  for disk in disklist:
+    CreatePVOnDisk(disk)
+  CreateVG(vgname, disklist)
+
+  status, lv_count, size, free = CheckVGExists(vgname)
+  if status:
+    print "Done! %s: size %s GiB, disks: %s" % (vgname, size,
+                                                ",".join(disklist))
+  else:
+    raise OperationalError("Although everything seemed ok, the volume "
+                           "group did not get created.")
+
+
+def main():
+  """application entry point.
+
+  This is just a wrapper over BootStrap, to handle our own exceptions.
+  """
+
+  try:
+    BootStrap()
+  except PrereqError, err:
+    print >> sys.stderr, "The prerequisites for running this tool are not met."
+    print >> sys.stderr, ("Please make sure you followed all the steps in "
+                          "the build document.")
+    print >> sys.stderr, "Description: %s" % str(err)
+    sys.exit(1)
+  except SysconfigError, err:
+    print >> sys.stderr, ("This system's configuration seems wrong, at "
+                          "least is not what I expect.")
+    print >> sys.stderr, ("Please check that the installation didn't fail "
+                          "at some step.")
+    print >> sys.stderr, "Description: %s" % str(err)
+    sys.exit(1)
+  except ParameterError, err:
+    print >> sys.stderr, ("Some parameters you gave to the program or the "
+                          "invocation is wrong. ")
+    print >> sys.stderr, "Description: %s" % str(err)
+    Usage()
+  except OperationalError, err:
+    print >> sys.stderr, ("A serious error has happened while modifying "
+                          "the system's configuration.")
+    print >> sys.stderr, ("Please review the error message below and make "
+                          "sure you clean up yourself.")
+    print >> sys.stderr, ("It is most likely that the system configuration "
+                          "has been partially altered.")
+    print >> sys.stderr, str(err)
+    sys.exit(1)
+  except ProgrammingError, err:
+    print >> sys.stderr, ("Internal application error. Please signal this "
+                          "to xencluster-team.")
+    print >> sys.stderr, "Error description: %s" % str(err)
+    sys.exit(1)
+  except Error, err:
+    print >> sys.stderr, "Unhandled application error: %s" % err
+    sys.exit(1)
+  except (IOError, OSError), err:
+    print >> sys.stderr, "I/O error detected, please report."
+    print >> sys.stderr, "Description: %s" % str(err)
+    sys.exit(1)
+
+
+if __name__ == "__main__":
+  main()