Commit 9031805d authored by Petr Pudlak

Merge branch 'stable-2.15' into master

* stable-2.15
  (no changes)

* stable-2.14
  Move _ValidateConfig to the submodule
  Fix building of shell command in export
  Add test showing a bug in location score calculation
  Bugfix for cluster location score calculation

* stable-2.13
  Properly get rid of all watcher jobs
  Move stdout_of to qa_utils
  Describe --no-verify-disks option in watcher man page
  Make disk verification optional

* stable-2.12
  Tell git to ignore tools/ssl-update
  Use 'exclude_daemons' option for master only
  Disable superfluous restarting of daemons
  Add tests exercising the "crashed" state handling
  Add proper handling of the "crashed" Xen state
  Handle SSL setup when downgrading
  Write SSH ports to ssconf files
  Noded: Consider certificate chain in callback
  Cluster-keys-replacement: update documentation
  Backend: Use timestamp as serial no for server cert
  UPGRADE: add note about 2.12.5
  NEWS: Mention issue 1094
  man: mention changes in renew-crypto
  Verify: warn about self-signed client certs
  Bootstrap: validate SSL setup before starting noded
  Clean up configuration of curl request
  Renew-crypto: remove superfluous copying of node certs
  Renew-crypto: propagate verbose and debug option
  Noded: log the certificate and digest on noded startup
  QA: reload rapi cert after renew crypto
  Prepare-node-join: use common functions
  Renew-crypto: remove dead code
  Init: add master client certificate to configuration
  Renew-crypto: rebuild digest map of all nodes
  Noded: make "bootstrap" a constant
  node-daemon-setup: generate client certificate
  tools: Move (Re)GenerateClientCert to common
  Renew cluster and client certificates together
  Init: create the master's client cert in bootstrap
  Renew client certs using ssl_update tool
  Run functions while (some) daemons are stopped
  Back up old client.pem files
  Introduce ssl_update tool
  x509 function for creating signed certs
  Add tools/ from 2.13
  Consider ECDSA in SSH setup
  Update documentation of watcher and RAPI daemon
  Watcher: add option for setting RAPI IP
  When connecting to Metad fails, log the full stack trace
  Set up the Metad client with allow_non_master
  Set up the configuration client properly on non-masters
  Add the 'allow_non_master' option to the WConfd RPC client
  Add the option to disable master checks to the RPC client
  Add 'allow_non_master' to the Luxi test transport class too
  Add 'allow_non_master' to FdTransport for compatibility
  Properly document all constructor arguments of Transport
  Allow the Transport class to be used for non-master nodes
  Don't define the set of all daemons twice

* stable-2.11
  Fix capitalization of TestCase
  Trigger renew-crypto on downgrade to 2.11


	  keep all the Haskell test data files
	  keep the auto-generated list of valid keys from master
	  merge the ssconf entry for ssh ports to the list of valid keys
	  keep the generated list of constructors from master
	  keep all tests
Signed-off-by: Petr Pudlak <>
Reviewed-by: Klaus Aehlig <>
parents d8e7d844 7f850407
......@@ -140,6 +140,7 @@
# scripts
......@@ -337,6 +337,7 @@ CLEANFILES = \
tools/vif-ganeti-metad \
tools/net-common \
tools/users-setup \
tools/ssl-update \
tools/vcluster-setup \
tools/prepare-node-join \
tools/ssh-update \
......@@ -605,12 +606,13 @@ rpc_stub_PYTHON = \
pytools_PYTHON = \
lib/tools/ \
lib/tools/ \
lib/tools/ \
lib/tools/ \
lib/tools/ \
lib/tools/ \
lib/tools/ \
lib/tools/ \
lib/tools/ \
lib/tools/ \
utils_PYTHON = \
......@@ -1294,7 +1296,8 @@ PYTHON_BOOTSTRAP = \
tools/node-cleanup \
tools/node-daemon-setup \
tools/prepare-node-join \
tools/ssh-update \
qa_scripts = \
qa/ \
......@@ -1535,7 +1538,8 @@ nodist_pkglib_python_scripts = \
tools/ensure-dirs \
tools/node-daemon-setup \
tools/prepare-node-join \
tools/ssh-update \
pkglib_python_basenames = \
$(patsubst daemons/%,%,$(patsubst tools/%,%,\
......@@ -1712,6 +1716,7 @@ TEST_FILES = \
test/data/htools/ \
test/data/htools/ \
test/data/htools/ \
test/data/htools/ \
test/data/htools/ \
test/data/htools/ \
test/data/htools/ \
......@@ -1867,6 +1872,7 @@ TEST_FILES = \
test/data/vgreduce-removemissing-2.02.66-ok.txt \
test/data/vgs-missing-pvs-2.02.02.txt \
test/data/vgs-missing-pvs-2.02.66.txt \
test/data/xen-xl-list-4.4-crashed-instances.txt \
test/data/xen-xm-info-4.0.1.txt \
test/data/xen-xm-list-4.0.1-dom0-only.txt \
test/data/xen-xm-list-4.0.1-four-instances.txt \
......@@ -2499,6 +2505,7 @@ tools/node-daemon-setup: MODULE =
tools/prepare-node-join: MODULE =
tools/ssh-update: MODULE =
tools/node-cleanup: MODULE =
tools/ssl-update: MODULE =
$(HS_BUILT_TEST_HELPERS): TESTROLE = $(patsubst test/hs/%,%,$@)
$(PYTHON_BOOTSTRAP) $(gnt_scripts) $(gnt_python_sbin_SCRIPTS): Makefile | stamp-directories
......@@ -67,6 +67,14 @@ to replace all SSH key pairs of non-master nodes with the master node's SSH
key pair.
Due to issue #1094 in Ganeti 2.11 and 2.12 up to version 2.12.4, we
advise rerunning 'gnt-cluster renew-crypto --new-node-certificates'
after an upgrade to 2.12.5 or higher.
......@@ -24,13 +24,30 @@ don't forget to use "shred" to remove files securely afterwards).
Replacing SSL keys
The cluster SSL key is stored in ``/var/lib/ganeti/server.pem``.
The cluster-wide SSL key is stored in ``/var/lib/ganeti/server.pem``.
Besides that, since Ganeti 2.11, each node has an individual node
SSL key, which is stored in ``/var/lib/ganeti/client.pem``. This
client certificate is signed by the cluster-wide SSL certificate.
Run the following command to generate a new key::
To renew the individual node certificates, run this command::
gnt-cluster renew-crypto --new-node-certificates
Run the following command to generate a new cluster-wide certificate::
gnt-cluster renew-crypto --new-cluster-certificate
# Older versions, which don't have this command, can instead use:
Note that this triggers both the renewal of the cluster certificate
and the renewal of the individual node certificates. The reason
for this is that the node certificates are signed by the cluster
certificate and thus need to be renewed and re-signed as soon as
the cluster certificate changes. Therefore, the command above is
equivalent to::
gnt-cluster renew-crypto --new-cluster-certificate --new-node-certificates
On older versions, which don't have this command, use this instead::
chmod 0600 /var/lib/ganeti/server.pem &&
openssl req -new -newkey rsa:1024 -days 1825 -nodes \
-x509 -keyout /var/lib/ganeti/server.pem \
......@@ -42,6 +59,10 @@ Run the following command to generate a new key::
gnt-cluster command /etc/init.d/ganeti restart
Note that older versions don't have individual node certificates, so
there is no need to handle their creation and distribution.
Replacing SSH keys
......@@ -942,6 +942,12 @@ def _VerifyNodeInfo(what, vm_capable, result, all_hvparams):
def _VerifyClientCertificate(cert_file=pathutils.NODED_CLIENT_CERT_FILE):
"""Verify the existence and validity of the client SSL certificate.
Also, verify that the client certificate is not self-signed. Self-
signed client certificates stem from Ganeti versions 2.12.0 - 2.12.4
and should be replaced by client certificates signed by the server
certificate. Hence we output a warning when we encounter a self-signed
one.
create_cert_cmd = "gnt-cluster renew-crypto --new-node-certificates"
if not os.path.exists(cert_file):
......@@ -952,9 +958,13 @@ def _VerifyClientCertificate(cert_file=pathutils.NODED_CLIENT_CERT_FILE):
(errcode, msg) = utils.VerifyCertificate(cert_file)
if errcode is not None:
return (errcode, msg)
# if everything is fine, we return the digest to be compared to the config
return (None, utils.GetCertificateDigest(cert_filename=cert_file))
(errcode, msg) = utils.IsCertificateSelfSigned(cert_file)
if errcode is not None:
return (errcode, msg)
# if everything is fine, we return the digest to be compared to the config
return (None, utils.GetCertificateDigest(cert_filename=cert_file))
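The self-signed check added here comes down to comparing a certificate's issuer against its own subject. A minimal standalone sketch of that logic, using pyOpenSSL (which Ganeti already depends on); `verify_client_cert` is an illustrative stand-in, not Ganeti's actual `_VerifyClientCertificate` or `utils.IsCertificateSelfSigned`:

```python
import hashlib
import ssl

from OpenSSL import crypto


def verify_client_cert(pem_data):
    """Return (error, digest) for a PEM certificate given as a string.

    A certificate whose issuer equals its subject is self-signed; such
    client certificates stem from Ganeti 2.12.0 - 2.12.4 and should be
    re-signed by the cluster (server) certificate.
    """
    cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
    if cert.get_issuer() == cert.get_subject():
        return ("client certificate of '%s' is self-signed"
                % cert.get_subject().CN, None)
    # On success, return the digest that gets compared against the
    # candidate-certificate map in the cluster configuration.
    der = ssl.PEM_cert_to_DER_cert(pem_data)
    return (None, hashlib.sha1(der).hexdigest())
```

A CA-signed client certificate passes the check and yields its digest, while the verify path warns instead of failing hard, matching the diff's intent of merely flagging old self-signed certificates.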
def _VerifySshSetup(node_status_list, my_name,
......@@ -1353,13 +1363,8 @@ def GetCryptoTokens(token_requests):
@return: list of tuples of the token type and the public crypto token
getents = runtime.GetEnts()
tokens = []
for (token_type, action, options) in token_requests:
for (token_type, action, _) in token_requests:
if token_type not in constants.CRYPTO_TYPES:
raise errors.ProgrammerError("Token type '%s' not supported." %
......@@ -1367,46 +1372,8 @@ def GetCryptoTokens(token_requests):
raise errors.ProgrammerError("Action '%s' is not supported." %
if token_type == constants.CRYPTO_TYPE_SSL_DIGEST:
if action == constants.CRYPTO_ACTION_CREATE:
# extract file name from options
cert_filename = None
if options:
cert_filename = options.get(constants.CRYPTO_OPTION_CERT_FILE)
if not cert_filename:
cert_filename = _DEFAULT_CERT_FILE
# For security reason, we don't allow arbitrary filenames
if not cert_filename in _VALID_CERT_FILES:
raise errors.ProgrammerError(
"The certificate file name path '%s' is not allowed." %
# extract serial number from options
serial_no = None
if options:
serial_no = int(options[constants.CRYPTO_OPTION_SERIAL_NO])
except ValueError:
raise errors.ProgrammerError(
"The given serial number is not an integer: %s." %
except KeyError:
raise errors.ProgrammerError("No serial number was provided.")
if not serial_no:
raise errors.ProgrammerError(
"Cannot create an SSL certificate without a serial no.")
True, cert_filename, serial_no,
"Create new client SSL certificate in %s." % cert_filename,
uid=getents.masterd_uid, gid=getents.masterd_gid)
elif action == constants.CRYPTO_ACTION_GET:
return tokens
......@@ -4859,9 +4826,11 @@ def CreateX509Certificate(validity, cryptodir=pathutils.CRYPTO_KEYS_DIR):
@return: Certificate name and public part
serial_no = int(time.time())
(key_pem, cert_pem) = \
min(validity, _MAX_SSL_CERT_VALIDITY), 1)
min(validity, _MAX_SSL_CERT_VALIDITY),
cert_dir = tempfile.mkdtemp(dir=cryptodir,
prefix="x509-%s-" % utils.TimestampForFilename())
......@@ -4950,12 +4919,13 @@ def _GetImportExportIoCommand(instance, mode, ieio, ieargs):
elif ieio == constants.IEIO_RAW_DISK:
(disk, ) = ieargs
real_disk = _OpenRealBD(disk)
if mode == constants.IEM_IMPORT:
suffix = utils.BuildShellCmd("| %s", disk.Import())
suffix = "| %s" % utils.ShellQuoteArgs(real_disk.Import())
elif mode == constants.IEM_EXPORT:
prefix = utils.BuildShellCmd("%s |", disk.Export())
prefix = "%s |" % utils.ShellQuoteArgs(real_disk.Export())
exp_size = disk.size
elif ieio == constants.IEIO_SCRIPT:
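The export bugfix above swaps `utils.BuildShellCmd` for `utils.ShellQuoteArgs` so that the disk command, an argv list, is quoted argument by argument instead of being treated as one opaque string. A rough sketch of that per-argument quoting using the standard library's `shlex.quote`; the helper name `build_import_suffix` is invented for illustration:

```python
import shlex


def build_import_suffix(cmd_argv):
    # Quote each argument of the disk import command separately, then
    # join them; quoting the whole command line as a single string would
    # wrap it in one pair of quotes and break the shell pipeline.
    return "| %s" % " ".join(shlex.quote(arg) for arg in cmd_argv)
```

For example, `build_import_suffix(["dd", "of=/dev/xvda", "bs=1 M"])` quotes only the argument containing a space, leaving the rest of the pipeline intact.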
......@@ -79,10 +79,12 @@ def GenerateHmacKey(file_name):
# pylint: disable=R0913
def GenerateClusterCrypto(new_cluster_cert, new_rapi_cert, new_spice_cert,
new_confd_hmac_key, new_cds,
new_confd_hmac_key, new_cds, new_client_cert,
rapi_cert_pem=None, spice_cert_pem=None,
spice_cacert_pem=None, cds=None,
......@@ -100,6 +102,10 @@ def GenerateClusterCrypto(new_cluster_cert, new_rapi_cert, new_spice_cert,
@param new_confd_hmac_key: Whether to generate a new HMAC key
@type new_cds: bool
@param new_cds: Whether to generate a new cluster domain secret
@type new_client_cert: bool
@param new_client_cert: Whether to generate a new client certificate
@type master_name: string
@param master_name: FQDN of the master node
@type rapi_cert_pem: string
@param rapi_cert_pem: New RAPI certificate in PEM format
@type spice_cert_pem: string
......@@ -127,6 +133,12 @@ def GenerateClusterCrypto(new_cluster_cert, new_rapi_cert, new_spice_cert,
new_cluster_cert, nodecert_file, 1,
"Generating new cluster certificate at %s" % nodecert_file)
# If the cluster certificate was renewed, the client cert has to be
# renewed and resigned.
if new_cluster_cert or new_client_cert:
utils.GenerateNewClientSslCert(clientcert_file, nodecert_file,
# confd HMAC key
if new_confd_hmac_key or not os.path.exists(hmackey_file):
logging.debug("Writing new confd HMAC key to %s", hmackey_file)
......@@ -177,7 +189,7 @@ def GenerateClusterCrypto(new_cluster_cert, new_rapi_cert, new_spice_cert,
def _InitGanetiServerSetup(master_name):
def _InitGanetiServerSetup(master_name, cfg):
"""Setup the necessary configuration for the initial node daemon.
This creates the nodepass file containing the shared password for
......@@ -185,11 +197,34 @@ def _InitGanetiServerSetup(master_name):
@type master_name: str
@param master_name: Name of the master node
@type cfg: ConfigWriter
@param cfg: the configuration writer
# Generate cluster secrets
GenerateClusterCrypto(True, False, False, False, False)
GenerateClusterCrypto(True, False, False, False, False, False, master_name)
# Add the master's SSL certificate digest to the configuration.
master_uuid = cfg.GetMasterNode()
master_digest = utils.GetCertificateDigest()
cfg.AddNodeToCandidateCerts(master_uuid, master_digest)
cfg.Update(cfg.GetClusterInfo(), logging.error)
if not os.path.exists(os.path.join(pathutils.DATA_DIR,
"%s%s" % (constants.SSCONF_FILEPREFIX,
raise errors.OpExecError("Ssconf file for master candidate certificates"
" was not written.")
if not os.path.exists(pathutils.NODED_CERT_FILE):
raise errors.OpExecError("The server certificate was not created properly.")
if not os.path.exists(pathutils.NODED_CLIENT_CERT_FILE):
raise errors.OpExecError("The client certificate was not created"
" properly.")
# set up the inter-node password and certificate
result = utils.RunCmd([pathutils.DAEMON_UTIL, "start", constants.NODED])
if result.failed:
raise errors.OpExecError("Could not start the node daemon, command %s"
......@@ -780,7 +815,7 @@ def InitCluster(cluster_name, mac_prefix, # pylint: disable=R0913, R0914
if modify_ssh_setup:
# set up the inter-node password and certificate
_InitGanetiServerSetup(, cfg)
logging.debug("Starting daemons")
result = utils.RunCmd([pathutils.DAEMON_UTIL, "start-all"])
......@@ -897,6 +932,7 @@ def SetupNodeDaemon(opts, cluster_name, node, ssh_port):
constants.NDS_SSCONF: ssconf.SimpleStore().ReadAll(),
constants.NDS_START_NODE_DAEMON: True,
constants.NDS_NODE_NAME: node,
ssh.RunSshCmdWithStdin(cluster_name, node, pathutils.NODE_DAEMON_SETUP,
......@@ -80,6 +80,7 @@ __all__ = [
......@@ -1460,12 +1461,13 @@ def GenericInstanceCreate(mode, opts, args):
return 0
class _RunWhileClusterStoppedHelper(object):
"""Helper class for L{RunWhileClusterStopped} to simplify state management
class _RunWhileDaemonsStoppedHelper(object):
"""Helper class for L{RunWhileDaemonsStopped} to simplify state management
def __init__(self, feedback_fn, cluster_name, master_node,
online_nodes, ssh_ports):
online_nodes, ssh_ports, exclude_daemons, debug, verbose):
"""Initializes this class.
@type feedback_fn: callable
......@@ -1478,6 +1480,13 @@ class _RunWhileClusterStoppedHelper(object):
@param online_nodes: List of names of online nodes
@type ssh_ports: list
@param ssh_ports: List of SSH ports of online nodes
@type exclude_daemons: list of string
@param exclude_daemons: list of daemons that will be restarted on the master
after all others are shut down
@type debug: boolean
@param debug: show debug output
@type verbose: boolean
@param verbose: show verbose output
self.feedback_fn = feedback_fn
......@@ -1491,6 +1500,10 @@ class _RunWhileClusterStoppedHelper(object):
self.nonmaster_nodes = [name for name in online_nodes
if name != master_node]
self.exclude_daemons = exclude_daemons
self.debug = debug
self.verbose = verbose
assert self.master_node not in self.nonmaster_nodes
def _RunCmd(self, node_name, cmd):
......@@ -1542,6 +1555,12 @@ class _RunWhileClusterStoppedHelper(object):
for node_name in self.online_nodes:
self.feedback_fn("Stopping daemons on %s" % node_name)
self._RunCmd(node_name, [pathutils.DAEMON_UTIL, "stop-all"])
# Start any daemons listed as exceptions
if node_name == self.master_node:
for daemon in self.exclude_daemons:
self.feedback_fn("Starting daemon '%s' on %s" % (daemon, node_name))
self._RunCmd(node_name, [pathutils.DAEMON_UTIL, "start", daemon])
# All daemons are shut down now
......@@ -1554,18 +1573,33 @@ class _RunWhileClusterStoppedHelper(object):
# Start cluster again, master node last
for node_name in self.nonmaster_nodes + [self.master_node]:
# Stop any daemons listed as exceptions.
# This might look unnecessary, but it makes sure that daemon-util
# starts all daemons in the right order.
if node_name == self.master_node:
for daemon in self.exclude_daemons:
self.feedback_fn("Stopping daemon '%s' on %s" % (daemon, node_name))
self._RunCmd(node_name, [pathutils.DAEMON_UTIL, "stop", daemon])
self.feedback_fn("Starting daemons on %s" % node_name)
self._RunCmd(node_name, [pathutils.DAEMON_UTIL, "start-all"])
# Resume watcher
def RunWhileClusterStopped(feedback_fn, fn, *args):
def RunWhileDaemonsStopped(feedback_fn, exclude_daemons, fn, *args, **kwargs):
"""Calls a function while all cluster daemons are stopped.
@type feedback_fn: callable
@param feedback_fn: Feedback function
@type exclude_daemons: list of string
@param exclude_daemons: list of daemons that are stopped, but immediately
restarted on the master to be available when calling
'fn'. If None, all daemons will be stopped and none
will be started before calling 'fn'.
@type fn: callable
@param fn: Function to be called when daemons are stopped
......@@ -1585,9 +1619,27 @@ def RunWhileClusterStopped(feedback_fn, fn, *args):
del cl
assert master_node in online_nodes
if exclude_daemons is None:
exclude_daemons = []
debug = kwargs.get("debug", False)
verbose = kwargs.get("verbose", False)
return _RunWhileDaemonsStoppedHelper(
feedback_fn, cluster_name, master_node, online_nodes, ssh_ports,
exclude_daemons, debug, verbose).Call(fn, *args)
def RunWhileClusterStopped(feedback_fn, fn, *args):
"""Calls a function while all cluster daemons are stopped.
@type feedback_fn: callable
@param feedback_fn: Feedback function
@type fn: callable
@param fn: Function to be called when daemons are stopped
return _RunWhileClusterStoppedHelper(feedback_fn, cluster_name, master_node,
online_nodes, ssh_ports).Call(fn, *args)
return RunWhileDaemonsStopped(feedback_fn, None, fn, *args)
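The control flow that `_RunWhileDaemonsStoppedHelper` implements — stop everything, restart the excluded daemons on the master, run the callback, then undo in reverse — can be sketched in isolation. This toy version records actions instead of shelling out to `daemon-util`; the `DaemonController` class is invented for illustration and is not part of Ganeti:

```python
class DaemonController(object):
    """Toy stand-in for daemon-util that records actions instead of
    actually stopping or starting anything."""

    def __init__(self):
        self.log = []

    def stop_all(self):
        self.log.append("stop-all")

    def start_all(self):
        self.log.append("start-all")

    def start(self, daemon):
        self.log.append("start %s" % daemon)

    def stop(self, daemon):
        self.log.append("stop %s" % daemon)


def run_while_daemons_stopped(ctl, exclude_daemons, fn, *args):
    # Stop everything first, then immediately restart the excluded
    # daemons so that 'fn' can still talk to them (renew-crypto needs
    # noded and wconfd, for example).
    ctl.stop_all()
    for daemon in exclude_daemons or []:
        ctl.start(daemon)
    try:
        return fn(*args)
    finally:
        # Stop the exceptions again before the final start-all so the
        # daemons come up in daemon-util's canonical order.
        for daemon in exclude_daemons or []:
            ctl.stop(daemon)
        ctl.start_all()
```

Stopping the excluded daemons again before the final `start-all` mirrors the comment in the diff: it may look unnecessary, but it lets the start-all step bring everything up in one consistent order.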
def GenerateTable(headers, fields, separator, data,
......@@ -46,6 +46,7 @@ from ganeti.cli import *
from ganeti import bootstrap
from ganeti import compat
from ganeti import constants
from ganeti import config
from ganeti import errors
from ganeti import netutils
from ganeti import objects
......@@ -966,7 +967,8 @@ def _ReadAndVerifyCert(cert_filename, verify_private_key=False):
def _RenewCrypto(new_cluster_cert, new_rapi_cert, # pylint: disable=R0911
rapi_cert_filename, new_spice_cert, spice_cert_filename,
spice_cacert_filename, new_confd_hmac_key, new_cds,
cds_filename, force, new_node_cert, new_ssh_keys):
cds_filename, force, new_node_cert, new_ssh_keys,
verbose, debug):
"""Renews cluster certificates, keys and secrets.
@type new_cluster_cert: bool
......@@ -994,6 +996,10 @@ def _RenewCrypto(new_cluster_cert, new_rapi_cert, # pylint: disable=R0911
@param new_node_cert: Whether to generate new node certificates
@type new_ssh_keys: bool
@param new_ssh_keys: Whether to generate new node SSH keys
@type verbose: boolean
@param verbose: show verbose output
@type debug: boolean
@param debug: show debug output
ToStdout("Updating certificates now. Running \"gnt-cluster verify\" "
......@@ -1048,13 +1054,15 @@ def _RenewCrypto(new_cluster_cert, new_rapi_cert, # pylint: disable=R0911
return 1
def _RenewCryptoInner(ctx):
ctx.feedback_fn("Updating cluster-wide certificates and keys")
# Note: the node certificate will be generated in the LU
ctx.feedback_fn("Updating certificates and keys")
......@@ -1062,9 +1070,6 @@ def _RenewCrypto(new_cluster_cert, new_rapi_cert, # pylint: disable=R0911
files_to_copy = []
if new_cluster_cert:
if new_rapi_cert or rapi_cert_pem:
......@@ -1086,14 +1091,101 @@ def _RenewCrypto(new_cluster_cert, new_rapi_cert, # pylint: disable=R0911
for file_name in files_to_copy:
ctx.ssh.CopyFileToNode(node_name, port, file_name)
RunWhileClusterStopped(ToStdout, _RenewCryptoInner)
if new_node_cert or new_ssh_keys:
def _RenewClientCerts(ctx):
ctx.feedback_fn("Updating client SSL certificates.")
cluster_name = ssconf.SimpleStore().GetClusterName()
for node_name in ctx.nonmaster_nodes + [ctx.master_node]:
ssh_port = ctx.ssh_ports[node_name]
data = {
constants.NDS_CLUSTER_NAME: cluster_name,
constants.NDS_NODE_NAME: node_name,
# Create a temporary ssconf file using the master's client cert digest
# and the 'bootstrap' keyword to enable distribution of all nodes' digests.
master_digest = utils.GetCertificateDigest()
ssconf_master_candidate_certs_filename = os.path.join(
pathutils.DATA_DIR, "%s%s" %
data="%s=%s" % (constants.CRYPTO_BOOTSTRAP, master_digest))
for node_name in ctx.nonmaster_nodes:
port = ctx.ssh_ports[node_name]
ctx.feedback_fn("Copying %s to %s:%d" %
(ssconf_master_candidate_certs_filename, node_name, port))
ctx.ssh.CopyFileToNode(node_name, port,
# Write the bootstrap entry to the config using wconfd.
config_live_lock = utils.livelock.LiveLock("renew_crypto")
cfg = config.GetConfig(None, config_live_lock)
cfg.AddNodeToCandidateCerts(constants.CRYPTO_BOOTSTRAP, master_digest)
cfg.Update(cfg.GetClusterInfo(), ctx.feedback_fn)
def _RenewServerAndClientCerts(ctx):
ctx.feedback_fn("Updating the cluster SSL certificate.")
master_name = ssconf.SimpleStore().GetMasterNode()
bootstrap.GenerateClusterCrypto(True, # cluster cert
False, # rapi cert
False, # spice cert
False, # confd hmac key
False, # cds
True, # client cert
for node_name in ctx.nonmaster_nodes:
port = ctx.ssh_ports[node_name]
server_cert = pathutils.NODED_CERT_FILE
ctx.feedback_fn("Copying %s to %s:%d" %
(server_cert, node_name, port))
ctx.ssh.CopyFileToNode(node_name, port, server_cert)
if new_rapi_cert or new_spice_cert or new_confd_hmac_key or new_cds:
RunWhileClusterStopped(ToStdout, _RenewCryptoInner)
# If only node certificates are recreated, call _RenewClientCerts only.
if new_node_cert and not new_cluster_cert:
RunWhileDaemonsStopped(ToStdout, [constants.NODED, constants.WCONFD],
_RenewClientCerts, verbose=verbose, debug=debug)
# If the cluster certificate is renewed, the client certificates need
# to be renewed too.
if new_cluster_cert:
RunWhileDaemonsStopped(ToStdout, [constants.NODED, constants.WCONFD],
_RenewServerAndClientCerts, verbose=verbose, debug=debug)
if new_node_cert or new_cluster_cert or new_ssh_keys:
cl = GetClient()
renew_op = opcodes.OpClusterRenewCrypto(node_certificates=new_node_cert,
renew_op = opcodes.OpClusterRenewCrypto(
node_certificates=new_node_cert or new_cluster_cert,