Is there an easy way to implement atomic integer operations (one-sided) in MPI? The last time I looked, about three years ago, the example in the MPI book was fairly complex to implement.
MPI one-sided is fairly complex, with about three (more like two-and-a-half) different mechanisms.
The first two modes are "active target synchronization", where the target (the process being targeted; the process making the one-sided call is called the origin) explicitly declares an epoch during which its window (the "shared" area) is exposed. You then have a distinction between this epoch being declared collectively (MPI_Win_fence) and it being local to a group (the MPI_Win_start/post/wait/complete calls).
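A minimal sketch of the collective (fence) flavor, assuming every rank exposes a single int in a window and rank 0 writes into rank 1's copy (the variable names here are illustrative, not from any particular source):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int buf = rank;                      /* local window memory           */
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* every rank opens the epoch    */
    if (rank == 0) {
        int val = 42;
        MPI_Put(&val, 1, MPI_INT, /*target=*/1, /*disp=*/0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);               /* every rank closes the epoch   */

    if (rank == 1) printf("rank 1 now holds %d\n", buf);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Note that even though only rank 0 issues the MPI_Put, all ranks have to participate in both fences, which is what makes this "active target" rather than truly one-sided.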
Something closer to true one-sided communication is done with the MPI_Win_lock/unlock calls, where the origin locks the "shared" area on the target to get exclusive access to it. This is called "passive target synchronization" because the target is completely unaware of anything happening to its shared area; this typically requires a daemon or similar asynchronous progress mechanism to be running on the target.
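Reusing the window and rank from the fence sketch above, a passive-target epoch would look roughly like this; only the origin makes any calls:

```c
/* Fragment reusing "win" and "rank" from the sketch above: the origin
 * (rank 0) writes into rank 1's window while rank 1 makes no matching
 * synchronization call at all. */
if (rank == 0) {
    int val = 99;
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, /*target=*/1, 0, win);
    MPI_Put(&val, 1, MPI_INT, /*target=*/1, /*disp=*/0, 1, MPI_INT, win);
    MPI_Win_unlock(1, win);  /* the put is guaranteed complete after unlock */
}
```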
That covers the state of MPI-2. Unfortunately, in a lock/unlock epoch you could only read or write, not both, so atomic fetch-and-whatever operations were not possible in a straightforward way. This was solved in MPI-3, which has the MPI_Fetch_and_op routine.
For instance, if you use MPI_REPLACE you get back the value currently in the "shared" memory area and overwrite it with something you specify, in a single atomic step. That is enough to implement atomic integer operations.
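As a hedged sketch of how this answers the original question: with MPI_SUM instead of MPI_REPLACE, MPI_Fetch_and_op becomes an atomic fetch-and-add, so every rank can increment a shared counter and learn the value it saw. The names (counter, counter_win) and the layout are assumptions for illustration, not a definitive implementation:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int counter = 0;                        /* the shared integer; rank 0's
                                               copy is the one we target   */
    MPI_Win counter_win;
    MPI_Win_create(&counter, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &counter_win);

    int one = 1, old_value;
    MPI_Win_lock(MPI_LOCK_SHARED, /*target=*/0, 0, counter_win);
    /* MPI_SUM makes this an atomic fetch-and-add; MPI_REPLACE would make
     * it an atomic swap instead. */
    MPI_Fetch_and_op(&one, &old_value, MPI_INT,
                     /*target=*/0, /*disp=*/0, MPI_SUM, counter_win);
    MPI_Win_unlock(0, counter_win);

    printf("rank %d saw counter value %d before its increment\n",
           rank, old_value);

    MPI_Win_free(&counter_win);
    MPI_Finalize();
    return 0;
}
```

A shared lock is sufficient here because MPI guarantees that accumulate-style operations (including MPI_Fetch_and_op) on the same window location are atomic with respect to each other.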
MPI 3.0 added atomics. See https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node272.htm for details.
MPI_Accumulate performs an atomic update on window data. MPI_Get_accumulate fetches the value and performs an update. MPI_Fetch_and_op is similar to MPI_Get_accumulate but is a shorthand function for the common case of a single element. MPI_Compare_and_swap does what the name suggests. See https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node290.htm for details on the semantic guarantees of these functions.
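As an illustration only, a compare-and-swap on an integer flag in rank 0's window could look like the fragment below; the window name flag_win and the assumption that the flag starts at 0 are hypothetical, not part of the standard's example:

```c
/* Fragment: assumes an MPI_INT flag initialized to 0 at displacement 0
 * of rank 0's window "flag_win". */
int expected = 0, desired = 1, previous;
MPI_Win_lock(MPI_LOCK_SHARED, /*target=*/0, 0, flag_win);
MPI_Compare_and_swap(&desired, &expected, &previous, MPI_INT,
                     /*target=*/0, /*disp=*/0, flag_win);
MPI_Win_unlock(0, flag_win);
if (previous == 0) {
    /* the swap succeeded: this rank changed the flag from 0 to 1 */
}
```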