Advanced Usage: Running in parallel
To simplify the parallelization of algorithms built with the package, we export some preliminary methods based on the MPI.jl wrapper. For further information on how to set up MPI with Julia, see https://github.com/JuliaParallel/MPI.jl.
using MatsubaraFunctions
using MPI
MPI.Init()
mpi_info()
mpi_println("I print on main.")
ismain = mpi_ismain() # ismain = true if rank is 0

# fermionic Matsubara grid and a MatsubaraFunction defined on it
T  = 1.0
N_ = 128
g  = MatsubaraGrid(T, N_, Fermion)
f  = MatsubaraFunction(g, 1)

# simple loop parallelization for UnitRange
for vidx in mpi_split(1 : length(g))
    println("My rank is $(mpi_rank()): $(vidx)")
end

# simple (+) allreduce
mpi_allreduce!(f)
In addition, calls to MatsubaraSymmetryGroup with an initialization function have an opt-in switch (mode) to enable parallel evaluation of the MatsubaraInitFunction (by default mode = :serial). If mode = :polyester, shared-memory multithreading via the Polyester Julia package (https://github.com/JuliaSIMD/Polyester.jl) is used. This mode is recommended for initialization functions that are cheap to evaluate and therefore unlikely to benefit from Threads.@threads due to the overhead of invoking the Julia scheduler. For more expensive functions, users can choose between mode = :threads, which simply uses Threads.@threads, and mode = :hybrid. The latter combines MPI with native Julia threads and can therefore be used to run calculations on multiple compute nodes.
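For illustration, a minimal sketch of such a call is given below. It builds on the grid g and the container f from the example above and assumes the symmetry-reduction interface (MatsubaraSymmetry, MatsubaraOperation, MatsubaraInitFunction and the callable MatsubaraSymmetryGroup) described in the symmetry section of the documentation; the exact constructor signatures and argument orders used here are assumptions and may differ between package versions.

# sketch: exploit the conjugation symmetry f(-v) = conj(f(v)) during initialization
# (constructor signatures and the argument order of MatsubaraOperation are assumed)
function conj_symm(
    w :: Tuple{MatsubaraFrequency},
    x :: Tuple{Int64}
    ) :: Tuple{Tuple{MatsubaraFrequency}, Tuple{Int64}, MatsubaraOperation}
    return (-w[1],), (x[1],), MatsubaraOperation(false, true)
end

SG = MatsubaraSymmetryGroup([MatsubaraSymmetry{1, 1}(conj_symm)], f)

# initialization function for a bare propagator, evaluated only on the
# symmetry-irreducible frequencies; the remaining ones follow from conj_symm
init(w :: Tuple{MatsubaraFrequency}, x :: Tuple{Int64}) :: ComplexF64 = 1.0 / (im * value(w[1]))
IF = MatsubaraInitFunction{1, 1, ComplexF64}(init)

# parallel evaluation using Polyester threads
SG(f, IF; mode = :polyester)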
Functions
MatsubaraFunctions.mpi_comm — Function
function mpi_comm() :: MPI.Comm
Return the MPI communicator.

MatsubaraFunctions.mpi_rank — Function
function mpi_rank() :: Int64
Return the current MPI rank.

MatsubaraFunctions.mpi_size — Function
function mpi_size() :: Int64
Return the size of the MPI communicator.

MatsubaraFunctions.mpi_split — Function
function mpi_split(
    r :: UnitRange{Int64}
) :: UnitRange{Int64}
Splits UnitRange evenly among available MPI ranks (including main). Can, for example, be used to parallelize loops.
MatsubaraFunctions.mpi_allreduce! — Function
function mpi_allreduce!(
    f :: MatsubaraFunction{GD, SD, DD, Q}
) :: Nothing where {GD, SD, DD, Q <: Number}
In-place MPI reduction (+) for MatsubaraFunction.
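Together, mpi_split and mpi_allreduce! cover the most common pattern: each rank fills only its share of a MatsubaraFunction, and the partial results are then summed over all ranks. Below is a minimal sketch reusing g and f from the example above; the element access f[v, 1], the grid indexing g[vidx] and set! are assumed to behave as in the basic-usage examples.

# make sure entries not touched by this rank are zero before the reduction
set!(f, 0.0)

# every rank initializes only its share of the frequencies ...
for vidx in mpi_split(1 : length(g))
    v       = g[vidx]                  # vidx-th frequency of the grid
    f[v, 1] = 1.0 / (im * value(v))    # rank-local contribution
end

# ... and the partial results are summed, so every rank ends up with the full f
mpi_allreduce!(f)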
MatsubaraFunctions.mpi_ismain — Function
mpi_ismain() :: Bool
Returns true for MPI rank 0.

MatsubaraFunctions.mpi_println — Function
function mpi_println(
    s :: String
) :: Nothing
Print the string s on MPI rank 0.

MatsubaraFunctions.mpi_info — Function
mpi_info() :: Nothing
Print information about the available resources.

MatsubaraFunctions.mpi_barrier — Function
mpi_barrier() :: Nothing
Place a synchronization barrier for all MPI ranks.
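As a usage note, a barrier is typically placed before rank 0 reports results or before timing a section, so that all ranks have reached the same point. A minimal sketch using only the functions documented above:

# ... rank-local work ...
mpi_barrier()                                # wait until every rank arrives here
mpi_println("All ranks passed the barrier.") # printed once, on rank 0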