I have existing C++11 code using std::array in the following form:
#include <array>
const unsigned int arraySize = 1024;
#define ARRAY_DEF std::array<int, arraySize>
int main()
{
    ARRAY_DEF x;
    x.fill(1);
    return 0;
}
Throughout the code I use ARRAY_DEF for readability and to make the code easier to maintain. No problems there.
Now I'd like to port the code to run in CUDA on the GPU. The problem is that std::array cannot be used in device code.
I think I need to leverage thrust::device_vector, but I can't see an easy way to declare a vector of static size in a #define. (I only see passing the size to the constructor, after the variable name, which defeats the point of using the #define.)
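Roughly, the only pattern I can see with thrust is the one below (a sketch with an illustrative VECTOR_DEF name, not code from my project), where the size only shows up in the constructor call:
#include <thrust/device_vector.h>
#include <thrust/fill.h>

const unsigned int arraySize = 1024;
#define VECTOR_DEF thrust::device_vector<int>   // the size cannot be part of the type here

int main()
{
    VECTOR_DEF x(arraySize);                     // the size only appears after the variable name
    thrust::fill(x.begin(), x.end(), 1);
    return 0;
}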
Is there another approach to declaring the vector with a static size within a #define?
Or is there perhaps another class in the CUDA libraries that mimics std::array and can run on the device?
Thanks all! Sadly, none of these answers fit my need. I took matters into my own hands and created a class that mimics std::array (mostly), can run in device/kernel functions, and was largely a find/replace to adopt. (OK, I also needed to replace other STL functions, but that's another question.) https://github.com/MikeBSilverman/CUDAHostDeviceArray (dead link)
Hope it helps someone else.
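For anyone who finds the link dead, the rough idea looks like the sketch below (illustrative names and a trimmed-down interface, not the actual class from the repository): a fixed-size aggregate wrapper whose members are marked __host__ __device__, so the same ARRAY_DEF macro works on both sides.
#include <cstddef>
#include <cuda_runtime.h>

template <typename T, std::size_t N>
class HostDeviceArray
{
public:
    __host__ __device__ T&       operator[](std::size_t i)       { return data_[i]; }
    __host__ __device__ const T& operator[](std::size_t i) const { return data_[i]; }
    __host__ __device__ std::size_t size() const { return N; }
    __host__ __device__ void fill(const T& value)
    {
        for (std::size_t i = 0; i < N; ++i) data_[i] = value;
    }
private:
    T data_[N];   // plain fixed-size storage, so the type is usable in device code
};

const unsigned int arraySize = 1024;
#define ARRAY_DEF HostDeviceArray<int, arraySize>

// Kernel writing one element per thread, showing the type works on the device.
__global__ void setOnes(ARRAY_DEF* a)
{
    std::size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < a->size()) (*a)[i] = 1;
}

int main()
{
    ARRAY_DEF host;                              // same #define on the host...
    host.fill(0);

    ARRAY_DEF* d_arr = nullptr;
    cudaMalloc((void**)&d_arr, sizeof(ARRAY_DEF));
    cudaMemcpy(d_arr, &host, sizeof(ARRAY_DEF), cudaMemcpyHostToDevice);
    setOnes<<<(arraySize + 255) / 256, 256>>>(d_arr);   // ...and on the device
    cudaMemcpy(&host, d_arr, sizeof(ARRAY_DEF), cudaMemcpyDeviceToHost);
    cudaFree(d_arr);
    return 0;
}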