:orphan:


How to run a Coupled-Cluster calculation in separate steps
===========================================================

Typically there is no problem in executing the three steps of a
coupled-cluster calculation with RELCCSD in a single run: 1) the Hartree-Fock
calculation, 2) the MOLTRA step, i.e. the transformation of the integrals from
the atomic to the molecular orbital basis, and 3) the coupled-cluster
calculation itself.

However, in some cases it is not the best approach to perform all of these in
one go. This is especially true for calculations that use a lot of memory in
the coupled-cluster part but are more economical in the preceding steps (for
instance, if the system has little or no symmetry or, in Fock-Space
calculations, if the active space is large).

In such cases, it is possible to do one step, save the results necessary for
the next calculation, and then run the next step. This can considerably shorten
the total wall time spent on a parallel calculation, since a larger number of
MPI processes can be started for the SCF (and/or MOLTRA) step(s) with the same
amount of memory per compute node.


Step 1: SCF
-----------

From the SCF part, one has to save the ``DFCOEF`` file after a successful
calculation. If the 2-component formalism is used, one can also save the files
``X2CMAT`` and ``AOMOMAT`` (and possibly ``AOPROPER``) in order to skip the
generation of the 4-component to 2-component transformation matrices. If your
system supports it, you can instead keep the scratch directory and thereby
avoid copying these files back and forth.

Currently DIRAC's execution script automatically retrieves these files and
places them in a gzip-compressed tar archive, so it is probably not necessary
to retrieve the files individually if this archive is kept.
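
As an illustration, the SCF step might be run along the following lines. This
is only a sketch: the file names, the ``pam`` flag spellings (``--mpi``,
``--get``) and whether a space-separated file list is accepted are assumptions
that should be checked against ``pam --help`` for your DIRAC version::

  # SCF step only, on many MPI processes; retrieve the restart files
  # (X2CMAT/AOMOMAT only exist for 2-component Hamiltonians)
  pam --mpi=64 --inp=scf.inp --mol=molecule.mol --get="DFCOEF X2CMAT AOMOMAT"

where ``scf.inp`` could look like (keyword spellings to be verified against
the manual)::

  **DIRAC
  .TITLE
   SCF step only
  .WAVE FUNCTION
  **HAMILTONIAN
  .X2C
  **WAVE FUNCTION
  .SCF
  *END OF INPUT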


Step 2: MOLTRA
--------------

MOLTRA will need the ``DFCOEF`` file and also the one-electron Fock matrix. If
2-component methods are used, and also in the case of a frozen-density
embedding calculation, the latter would have to be re-generated, which is why
it is a good idea to keep the ``X2CMAT`` and ``AOMOMAT`` files around.

The result of the MOLTRA step will be a set of files whose names start with
``MDC``.  For ``SCHEME 4`` these files can be considered 'independent' of the
number of processes used in the parallel run (because only one of them,
created by the master, holds the information used by other modules like
RELCCSD, such as the number of transformed spinors), so it does not matter how
the calculation from which they were obtained was performed.

This is not true for ``SCHEME 6``, because there the information needed by
other modules is not confined to the file written by the master, so one has to
be careful to use the same number of MPI processes in the MOLTRA step and
beyond.
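
Continuing the sketch from the SCF step (again, the flag spellings, wildcard
support in ``--get``/``--put`` and the keyword spellings in the input are
assumptions; keeping the scratch directory of the SCF run is an equally valid
way of providing ``DFCOEF``, ``X2CMAT`` and ``AOMOMAT``)::

  # integral transformation only, reusing the SCF restart files;
  # the MDC* pattern refers to the files described above
  pam --mpi=64 --inp=moltra.inp --mol=molecule.mol \
      --put="DFCOEF X2CMAT AOMOMAT" --get="MDC*"

with ``moltra.inp`` along the lines of::

  **DIRAC
  .TITLE
   MOLTRA step only
  .4INDEX
  **HAMILTONIAN
  .X2C
  **MOLTRA
  .ACTIVE
   energy -10.0 20.0 1.0
  .SCHEME
   4
  *END OF INPUT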


Step 3: RELCCSD
---------------

RELCCSD will need the ``MDC*`` files generated by MOLTRA. One must be careful
with the input, however: the keyword ``.NO4INDEX`` has to be specified, and
the :ref:`**MOLTRA` section must not be present. Again, for ``SCHEME 6`` one
must use the same number of MPI processes as in the MOLTRA step.
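
A sketch of this step is given below. The placement of ``.NO4INDEX`` in the
``**RELCC`` section and the other keyword spellings are assumptions to be
verified against the manual; the simplest way to make the ``MDC*`` files
available is to run in the scratch directory kept from the MOLTRA step instead
of copying them with ``--put``::

  # coupled-cluster step only, typically with fewer MPI processes but
  # more memory per process; no **MOLTRA section and no .4INDEX keyword
  pam --mpi=16 --inp=cc.inp --mol=molecule.mol --put="MDC*"

with ``cc.inp`` along the lines of::

  **DIRAC
  .TITLE
   coupled-cluster step only
  .WAVE FUNCTION
  **HAMILTONIAN
  .X2C
  **WAVE FUNCTION
  .RELCCSD
  **RELCC
  .NO4INDEX
  *END OF INPUT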


A note on Fock-Space Calculations
---------------------------------

In the case of Fock-Space calculations, an additional trick can be used: if
the previous/lower sectors are converged, RELCCSD can be restarted and made to
skip them. In order to do that, one has to do the following in addition to the
actions outlined above: first, instead of ``MAXIT=n``, use ``MAXITk=n`` (where
k denotes the sector: 00, 10, 01, 11, 02, 20) and set the ``MAXITk`` of all
converged sectors to zero. Then restart exactly the same calculation as before
(in terms of number of nodes etc.). It is important that the intermediate
files of RELCCSD are present in the scratch directory in this case. The
easiest way to ensure this is to keep the scratch directory from the previous
run.
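
As a purely illustrative fragment in the ``MAXITk=n`` notation used above: for
a restart in which sectors 00 and 01 are already converged and only sector 11
still needs iterations, the corresponding settings would read something like
the following (how these assignments enter the actual input, and whether
inline ``!`` comments are allowed there, depends on your DIRAC version)::

  MAXIT00=0     ! sector 00 converged in the previous run: skip it
  MAXIT01=0     ! sector 01 converged in the previous run: skip it
  MAXIT11=30    ! keep iterating sector 11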