3. Basic Usage

In abICS, physical quantities are calculated by an external solver (VASP, QE, etc.) while the atomic coordinates are updated at each Monte Carlo step. All solver input other than the coordinates therefore has to be prepared in advance, as a reference file written in the solver's input format.

3.1. Prepare a reference file

Prepare an input file in the format of the solver to be used. The directory containing the reference file is specified by base_input_dir in the [solver] section of the abICS input file. The coordinate information does not need to be written, because abICS generates it from its own input file at each step. The following is an example of a QE reference file.

&CONTROL
  calculation = 'scf'
  tstress = .false.
  tprnfor = .false.
  pseudo_dir = '~/qe/pot'
  disk_io = 'low'
  wf_collect = .false.
/
&SYSTEM
  ecutwfc      =  60.0
  occupations  = "smearing"
  smearing     = "gauss"
  degauss      = 0.01
/
&ELECTRONS
  mixing_beta = 0.7
  conv_thr = 1.0d-8
  electron_maxstep = 100
/
ATOMIC_SPECIES
Al 26.981 Al.pbesol-nl-kjpaw_psl.1.0.0.UPF
Mg 24.305 Mg.pbesol-spnl-kjpaw_psl.1.0.0.UPF
O  16.000 O.pbesol-n-kjpaw_psl.1.0.0.UPF
ATOMIC_POSITIONS crystal

K_POINTS automatic
1 1 1 0 0 0
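
The reference file is placed in the directory given by base_input_dir (./baseinput in the example of the next section). A minimal sketch of the layout, assuming the QE input is named scf.in (the expected file name is solver-specific; check the solver documentation):

$ ls ./baseinput
scf.in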

3.2. Make the abICS input file

The abICS input file consists of the following four sections:

  1. [replica] section: Specifies the parameters of the replica exchange Monte Carlo part, such as the number of replicas, the temperature range, and the number of Monte Carlo steps.

  2. [solver] section: Specifies the parameters of the (first-principles) solver, including the type of solver (VASP, QE, …), the path to the solver executable, and the directory containing the immutable reference input files.

  3. [observer] section: Specifies the physical quantities to be calculated.

  4. [config] section: Specifies the configuration of the alloy, etc.

For details, see the Input Files Format section. The following is an example of an input file that selects QE as the solver.

[replica]
nreplicas = 2
nprocs_per_replica = 1

kTstart = 1000.0
kTend = 1200.0

nsteps = 2  # Number of steps for sampling
RXtrial_frequency = 1
sample_frequency = 1
print_frequency = 1

[solver]
type = 'qe'
path = './pw.x'
base_input_dir = './baseinput'
perturb = 0.0
run_scheme = 'mpi_spawn'

[config]
unitcell = [[8.1135997772, 0.0000000000, 0.0000000000],
            [0.0000000000, 8.1135997772, 0.0000000000],
            [0.0000000000, 0.0000000000, 8.1135997772]]
supercell = [1,1,1]

[[config.base_structure]]
type = "O"
coords = [
     [0.237399980, 0.237399980, 0.237399980],
     [0.762599945, 0.762599945, 0.762599945],
     [0.512599945, 0.012600004, 0.737399936],
     [0.487399966, 0.987399936, 0.262599975],
     [0.012600004, 0.737399936, 0.512599945],
     [0.987399936, 0.262599975, 0.487399966],
     [0.737399936, 0.512599945, 0.012600004],
     [0.262599975, 0.487399966, 0.987399936],
     [0.987399936, 0.487399966, 0.262599975],
     [0.012600004, 0.512599945, 0.737399936],
     [0.487399966, 0.262599975, 0.987399936],
     [0.512599945, 0.737399936, 0.012600004],
     [0.262599975, 0.987399936, 0.487399966],
     [0.737399936, 0.012600004, 0.512599945],
     [0.237399980, 0.737399936, 0.737399936],
     [0.762599945, 0.262599975, 0.262599975],
     [0.512599945, 0.512599945, 0.237399980],
     [0.487399966, 0.487399966, 0.762599945],
     [0.012600004, 0.237399980, 0.012600004],
     [0.987399936, 0.762599945, 0.987399936],
     [0.987399936, 0.987399936, 0.762599945],
     [0.012600004, 0.012600004, 0.237399980],
     [0.487399966, 0.762599945, 0.487399966],
     [0.512599945, 0.237399980, 0.512599945],
     [0.737399936, 0.237399980, 0.737399936],
     [0.262599975, 0.762599945, 0.262599975],
     [0.237399980, 0.512599945, 0.512599945],
     [0.762599945, 0.487399966, 0.487399966],
     [0.762599945, 0.987399936, 0.987399936],
     [0.237399980, 0.012600004, 0.012600004],
     [0.737399936, 0.737399936, 0.237399980],
     [0.262599975, 0.262599975, 0.762599945],
     ]

[[config.defect_structure]]
coords = [
     [0.000000000, 0.000000000, 0.000000000],
     [0.749999940, 0.249999985, 0.499999970],
     [0.249999985, 0.749999940, 0.499999970],
     [0.249999985, 0.499999970, 0.749999940],
     [0.749999940, 0.499999970, 0.249999985],
     [0.499999970, 0.749999940, 0.249999985],
     [0.499999970, 0.249999985, 0.749999940],
     [0.000000000, 0.499999970, 0.499999970],
     [0.749999940, 0.749999940, 0.000000000],
     [0.249999985, 0.249999985, 0.000000000],
     [0.249999985, 0.000000000, 0.249999985],
     [0.749999940, 0.000000000, 0.749999940],
     [0.499999970, 0.000000000, 0.499999970],
     [0.000000000, 0.749999940, 0.749999940],
     [0.000000000, 0.249999985, 0.249999985],
     [0.499999970, 0.499999970, 0.000000000],
     [0.374999970, 0.374999970, 0.374999970],
     [0.624999940, 0.624999940, 0.624999940],
     [0.374999970, 0.874999940, 0.874999940],
     [0.624999940, 0.124999993, 0.124999993],
     [0.874999940, 0.874999940, 0.374999970],
     [0.124999993, 0.124999993, 0.624999940],
     [0.874999940, 0.374999970, 0.874999940],
     [0.124999993, 0.624999940, 0.124999993],
     ]
[[config.defect_structure.groups]]
name = 'Al'
# species = ['Al']    # default
# coords = [[[0,0,0]]]  # default
num = 16
[[config.defect_structure.groups]]
name = 'Mg'
# species = ['Mg']    # default
# coords = [[[0,0,0]]]  # default
num = 8


[observer]
ignored_species = ['O']

3.3. Execution

Execute abICS via mpiexec, passing the abICS input file as an argument. The number of MPI processes specified here must be greater than or equal to the number of replicas.

$ mpiexec -np 2 abics input.toml

This creates directories named after the replica numbers (0, 1, …) under the current directory, and each replica runs the solver inside its own directory.
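
For example, with nreplicas = 2 the working directory after a run looks roughly as follows (a sketch; each replica directory additionally holds the solver's own input and output files):

$ ls
0/  1/  input.toml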

3.4. Tips for the number of MPI processes

abICS uses the MPI library function MPI_Comm_spawn to run the solver. This function launches another program on newly created MPI processes.

For example, consider a parallel machine with 4 CPU cores per node, on which you want to run two replicas, each invoking a solver on 4 CPU cores. If abICS is invoked as mpiexec -np 2 abics input.toml, the replica control processes A and B start on the first two cores of node 0, and each spawns a four-process solver, a and b. Solver a then fills the remaining two cores of node 0 and the first two cores of node 1, while solver b occupies the remaining two cores of node 1 and the first two cores of node 2. This causes inter-node communication within each solver and reduces performance.

By launching abICS with more initial processes, you can align the solver processes with the node boundaries and prevent this unnecessary straddling of nodes. In this example, mpiexec -np 4 abics input.toml lets the initial processes (including the replica control processes A and B) fill all the cores of node 0, so that solvers a and b fill nodes 1 and 2, respectively.
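
To make the comparison concrete, the two invocations discussed above differ only in the number of initial processes (the machine layout of this example, 4 cores per node, is assumed):

# 2 replicas, 4 solver processes each
$ mpiexec -np 2 abics input.toml   # solvers a and b straddle node boundaries
$ mpiexec -np 4 abics input.toml   # initial processes fill node 0; solvers fill nodes 1 and 2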

3.5. Comments on MPI implementation

In MPI_Comm_spawn, an MPI implementation can use the attribute MPI_UNIVERSE_SIZE, which tells it how many processes can be started in total. This section comments on several MPI implementations, including how to set MPI_UNIVERSE_SIZE.

3.5.1. OpenMPI

MPI_UNIVERSE_SIZE is automatically set to the number of available CPU cores. If you want more processes, pass the --oversubscribe option to the mpiexec command.
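
For example, to start more processes than there are available cores (a sketch; the process count is illustrative):

$ mpiexec --oversubscribe -np 2 abics input.toml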

When one of the spawned processes returns a nonzero exit code, all of the OpenMPI processes abort. The --mca orte_abort_on_non_zero_status 0 option allows you to ignore the exit code. Quantum ESPRESSO, for example, may return a nonzero value because of a floating-point exception even when the calculation has completed successfully.
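
A launch that tolerates nonzero solver exit codes might look like this (a sketch):

$ mpiexec --mca orte_abort_on_non_zero_status 0 -np 2 abics input.toml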

3.5.2. MPICH / Intel MPI

The -usize <num> option sets MPI_UNIVERSE_SIZE. Note, however, that MPICH and Intel MPI do not seem to use this value in MPI_Comm_spawn.
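
For example (the universe size of 12 is illustrative, e.g. 4 initial processes plus two 4-process solvers):

$ mpiexec -usize 12 -np 4 abics input.toml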

3.5.3. HPE (SGI) MPT

The -up <num> option sets MPI_UNIVERSE_SIZE. It must be given before the -np <num> option.
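
For example (again with an illustrative universe size of 12; the launcher command name may differ on your system):

$ mpiexec -up 12 -np 4 abics input.toml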

3.5.4. Others

On large supercomputers, the vendor may provide a dedicated MPI execution script that works with the job scheduler. In that case, refer to the system's manual. On the ISSP supercomputer systems Sekirei and Enaga, for example, mpijob -spawn sets MPI_UNIVERSE_SIZE properly.