I am learning OpenMPI in C. I'm having a bit of trouble doing matrix multiplication with this program: when I run it, the results are wrong. The program compiles, but I feel that my matrix multiplication algorithm is wrong somewhere.
My approach to solving this problem is to use MPI_Scatter to scatter matrix A, transpose matrix B, and then MPI_Scatter matrix B as well. Once they are scattered, I do the calculation for the matrix multiplication and Gather the result back to the root process. I'm not sure if I'm missing something, but I don't fully understand Scatter and Gather yet. I know that with Send you can send to individual processes and Recv from different processes, but how does this work with Scatter and Gather? Let me know if I made a mistake somewhere in this code. Thanks.
My source code:
#define N 512
#include <stdio.h>
#include <math.h>
#include <mpi.h>
#include <sys/time.h>
print_results(char *prompt, float a[N][N]);
int main(int argc, char *argv[]) {
    int size, rank, blksz, i, j, k;
    float a[N][N], b[N][N], c[N][N];
    char *usage = "Usage: %s file\n";
    float row[N][N], col[N][N];
    FILE *fd;
    int portion, lowerbound, upperbound;
    double elapsed_time, start_time, end_time;
    struct timeval tv1, tv2;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    blksz = (int) ceil((double) N / size);

    /*
    if (argc < 2) {
        fprintf(stderr, usage, argv[0]);
        return -1;
    }
    if ((fd = fopen(argv[1], "r")) == NULL) {
        fprintf(stderr, "%s: Cannot open file %s for reading.\n", argv[0], argv[1]);
        fprintf(stderr, usage, argv[0]);
        return -1;
    }
    */

    // Read input from file for matrices a and b.
    // The I/O is not timed because this I/O needs
    // to be done regardless of whether this program
    // is run sequentially on one processor or in
    // parallel on many processors. Therefore, it is
    // irrelevant when considering speedup.
    if (rank == 0) {
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                a[i][j] = i + j;
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                b[i][j] = i + j;
        /*
        for (i = 0; i < N; i++) {
            for (j = i + 1; j < N; j++) {
                int temp = b[i][j];
                b[i][j] = b[j][i];
                b[j][i] = temp;
            }
        }
        */
    }

    //TODO: Add a barrier prior to the time stamp.
    MPI_Barrier(MPI_COMM_WORLD);

    // Take a time stamp
    gettimeofday(&tv1, NULL);

    //TODO: Scatter the input matrices a and b.
    MPI_Scatter(a, blksz * N, MPI_FLOAT, row, blksz * N, MPI_FLOAT, 0,
                MPI_COMM_WORLD);
    MPI_Scatter(b, blksz * N, MPI_FLOAT, col, blksz * N, MPI_FLOAT, 0,
                MPI_COMM_WORLD);

    //TODO: Add code to implement matrix multiplication (C=AxB) in parallel.
    for (i = 0; i < blksz && rank * blksz + i < N; i++) {
        for (j = 0; j < N; j++) {
            c[i][j] = 0.0;
            for (k = 0; k < N; k++) {
                c[i][j] += row[i][j] * col[j][k];
            }
        }
    }

    //TODO: Gather partial result back to the master process.
    MPI_Gather(c, blksz * N, MPI_FLOAT, c, blksz * N, MPI_FLOAT, 0,
               MPI_COMM_WORLD);

    // Take a time stamp. This won't happen until after the master
    // process has gathered all the input from the other processes.
    gettimeofday(&tv2, NULL);
    elapsed_time = (tv2.tv_sec - tv1.tv_sec) + ((tv2.tv_usec - tv1.tv_usec)
                                                / 1000000.0);
    printf("elapsed_time=\t%lf (seconds)\n", elapsed_time);

    // print results
    MPI_Barrier(MPI_COMM_WORLD);
    print_results("C = ", c);
    MPI_Finalize();
}

print_results(char *prompt, float a[N][N]) {
    int i, j;
    printf("\n\n%s\n", prompt);
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            printf(" %.2f", a[i][j]);
        }
        printf("\n");
    }
    printf("\n\n");
}
Your computational kernel is wrong. Since b is supposedly transposed, c[i][j] is simply the dot product of row i of a and row j of b, so the innermost loop should read:
for (k = 0; k < N; k++) {
    c[i][j] += row[i][k] * col[j][k]; // row[i][k] and not row[i][j]
}
Besides, your matrices are float, but in the (commented-out) transposition code the temp variable is an int. It happens to work in this particular case, because you initialise the elements of a and b with integer values, but it won't work in the general case.
Otherwise the scatter/gather part looks fine. Mind that your code would not work if N is not divisible by the number of MPI processes. To handle those cases you might want to look into using MPI_Scatterv and MPI_Gatherv.