I would like to "nest" `parallel for` loops using OpenMP. Here is a toy code:
#include <iostream>
#include <cmath>

void subproblem(int m) {
    #pragma omp parallel for
    for (int j{0}; j < m; ++j) {
        double sum{0.0};
        for (int k{0}; k < 10000000; ++k) {
            sum += std::cos(static_cast<double>(k));
        }
        #pragma omp critical
        { std::cout << "Sum: " << sum << std::endl; }
    }
}
int main(int argc, const char *argv[]) {
    int n{2};
    int m{8};
    #pragma omp parallel for
    for (int i{0}; i < n; ++i) {
        subproblem(m);
    }
    return 0;
}
So far, I have only found solutions that either disable nested parallelism or always allow it, but what I want is to enable it only when the number of threads already launched is below the number of cores. Is there an OpenMP solution for that, perhaps using tasks?
Rather than using a pair of nested parallel regions, you can tell OpenMP to "collapse" the nested loops into a single parallel loop over the n*m iteration space:
#pragma omp parallel for collapse(2)
for (int i{0}; i < n; ++i) {
    for (int j{0}; j < m; ++j) {
        // ...
    }
}
This lets OpenMP divide the n*m iterations among the threads appropriately, regardless of the relative values of n and m, and it never creates more threads than the single parallel region requests.