**GENERAL
.DIRECT
Direct evaluation of two-electron integrals for Fock matrices (all two-electron integrals for other uses, e.g. CI, CCSD, MCSCF, are always evaluated directly, i.e. never read from disk).
The default is to evaluate LL, SL, and SS integrals directly (1 = evaluate directly; 0 = do not evaluate directly):
.DIRECT
1 1 1
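For example, to evaluate the LL and SL integrals directly while letting the SS integrals be handled conventionally (an illustrative, non-default setting), one could give:
.DIRECT
1 1 0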
.4INDEX
Activate the AO-to-MO transformation. By default, the transformation is called automatically when the correlation method is specified.
.SPHTRA
Transformation to spherical harmonics embedded in the transformation to orthonormal basis; totally symmetric contributions are deleted.
The default is a spherical transformation of large and small components, respectively (1 = on; 0 = off):
.SPHTRA
1 1
The transformation of the large components is a standard transformation from Cartesian functions to spherical harmonics. The transformation of the small components is modified, however, in accordance with the restricted kinetic balance relation.
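As an illustrative (non-default) example, the spherical transformation could be switched off for both components with:
.SPHTRA
0 0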
.PCMOUT
Write MO coefficients to the formatted file DFPCMO. This is useful for porting coefficients between machines with different binary structure. For reading the DFPCMO file no keyword is needed; simply copy the file to the working directory.
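A minimal illustrative fragment requesting the formatted coefficient file (copying DFPCMO between machines is then done outside DIRAC):
**GENERAL
.PCMOUT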
.ACMOUT
Write coefficients in C1 symmetry to the unformatted file DFACMO.
.ACMOIN
Import coefficients in C1 symmetry from the unformatted file DFACMO into the current symmetry. This assumes that the current symmetry is lower than the symmetry used for obtaining the original coefficients.
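A sketch of the intended two-step use, assuming a first run in the higher symmetry and a second run in the lower symmetry, with DFACMO copied between the two working directories by the user: in the first run give
**GENERAL
.ACMOUT
and in the second run give
**GENERAL
.ACMOIN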
.LOWJACO
Use Jacobi diagonalization in the Löwdin orthogonalization procedure (subroutine LOWGEN). This is much slower than the default Householder method but does not mix AOs in the case of a block-diagonal overlap matrix.
.DOJACO
Use the Jacobi method for matrix diagonalization (currently limited to real matrices). The default Householder method is generally more efficient, but may mix degenerate eigenvectors of different symmetries.
.QJACO
Employ pure Jacobi diagonalization of quaternion matrices. This properly handles degenerate eigenvectors. It is slower than .DOJACO and cannot be combined with it. Experimental option.
.LINDEP
Thresholds for linear dependence in the large and small components; they refer to the smallest acceptable eigenvalues of the overlap matrix. The default is:
.LINDEP
1.0D-6 1.0D-8
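For instance, to tighten both thresholds by one order of magnitude (an illustrative choice, not a recommendation):
.LINDEP
1.0D-7 1.0D-9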
.RKBIMP
Import SCF coefficients calculated using restricted kinetic balance (RKB) and add the remaining UKB component (the full small-component set). This option is a convenient way to obtain (unrestricted) magnetic balance in response calculations involving a uniform magnetic field (e.g. NMR shielding and magnetizability), in particular when combined with London orbitals, which makes the magnetic balance atomic.
.PRJTHR
RKBIMP projects out the RKB coefficients transformed to the orthonormal basis and then adds the remainder, corresponding to the UKB complement. This keyword sets the threshold for the projection. The default is:
.PRJTHR
1.0D-5
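For example, a looser projection threshold (illustrative value) would be requested with:
.PRJTHR
1.0D-4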
.PRINT
General print level. The higher the number, the more output is produced. This option is mainly useful for code debugging. The default is:
.PRINT
0
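For debugging one might, for instance, raise the level (illustrative value):
.PRINT
2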
.LOGMEM
Write a line to the output for each memory allocation done by DIRAC. This is mainly useful for programmers or for diagnosing out-of-memory problems.
.NOSET
Warning
documentation missing
.FAMILY
Warning
documentation missing
.SKIP2E
Warning
documentation missing
*PARALLEL
Parallelization directives.
.PRINT
Print level for the parallel calculation. Default:
.PRINT
0
A print level of at least 2 is needed to evaluate the parallelization efficiency. Complete timings for all nodes are given if the print level is 4 or higher.
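For example, to obtain complete timings for all nodes (following the levels described above), one could give:
*PARALLEL
.PRINT
4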
.NTASK
Number of tasks to send to each node when distributing the calculation of two-electron integrals. Default:
.NTASK
1
A task is defined as a shell of atomic integrals, a shell corresponding to a block in the basis set input. One may therefore increase the number of shells given to each node in order to reduce the amount of communication. However, the program uses dynamic allocation of work to the nodes, so this option should be used with some care: tasks that are too large may cause the dynamic load balancing to fail, giving an overall decrease in efficiency. The parallelization is also very coarse grained, so the amount of communication seldom represents a significant problem.
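For example, to send ten shells per task (an illustrative value; whether this helps depends on the load balancing discussed above):
.NTASK
10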