I've discovered by accident that an STL vector defined as follows:
std::vector<float> test;
test.resize(10000 * 10000 * 5);
uses significantly less RAM than the following definition:
std::vector<std::vector<std::vector<float>>> test;
test.resize(10000);
for (int i = 0; i < 10000; i++)
{
    test[i].resize(10000);
    for (int j = 0; j < 10000; j++)
    {
        test[i][j].resize(5);
    }
}
The linear vector (the first definition) uses the amount of RAM you would calculate by hand: 10000 * 10000 * 5 floats at 4 bytes each is 2 GB. So my question is: why does a 3D vector use far more RAM than a linear one? In this example it was significantly more, about 4 GB.
In the former case you have:
sizeof(vector<float>) // outermost vector
+ 10000 * 10000 * 5 * sizeof(float) // xyz space
In the latter you have:
sizeof(vector<vector<vector<float>>>) // outermost vector
+ 10000 * sizeof(vector<vector<float>>) // x axis
+ 10000 * 10000 * sizeof(vector<float>) // xy plane
+ 10000 * 10000 * 5 * sizeof(float) // xyz space
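Plugging in the numbers makes the difference concrete. This is a back-of-the-envelope sketch of my own, not from the question; it uses whatever sizeof(std::vector<float>) is on your platform, commonly 24 bytes (three pointers) on a 64-bit system, and ignores per-allocation heap bookkeeping, which the nested case pays roughly 100 million times:

#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    // Size of a vector object itself (typically 3 pointers = 24 bytes on 64-bit).
    const std::size_t header = sizeof(std::vector<float>);
    const std::size_t X = 10000, Y = 10000, Z = 5;

    // Flat layout: one header plus one contiguous float buffer.
    const std::size_t flat = header + X * Y * Z * sizeof(float);

    // Nested layout: a header for every inner vector plus the same float payload.
    const std::size_t nested = header                       // outermost vector
                             + X * header                   // x axis
                             + X * Y * header               // xy plane
                             + X * Y * Z * sizeof(float);   // xyz space

    std::printf("flat:   %zu bytes (~%.2f GB)\n", flat, flat / 1e9);
    std::printf("nested: %zu bytes (~%.2f GB)\n", nested, nested / 1e9);
}

With 24-byte headers this prints roughly 2.00 GB for the flat layout and 4.40 GB for the nested one, which matches the ~2 GB vs ~4 GB you observed.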
The typical value of sizeof(vector<T>) for any T is 3 * sizeof(T*): a common layout is one pointer to the first element, one past the last element, and one past the end of the allocated storage. That is also, I believe, the minimal value allowed by the standard, since capacity must be stored separately from size: reserve() must change the value of capacity() but not of size().
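If you want 3-D indexing without paying for roughly 100 million vector headers, the usual fix is to keep the single flat vector and do the index arithmetic in a thin wrapper. A minimal sketch, with a hypothetical Grid3 class of my own naming:

#include <cstddef>
#include <vector>

// A 3-D view over one contiguous buffer: one allocation, one vector header,
// and the same element count as the nested version.
class Grid3
{
public:
    Grid3(std::size_t x, std::size_t y, std::size_t z)
        : ny_(y), nz_(z), data_(x * y * z) {}

    float& operator()(std::size_t i, std::size_t j, std::size_t k)
    {
        // Row-major indexing: consecutive k values sit next to each other in memory.
        return data_[(i * ny_ + j) * nz_ + k];
    }

private:
    std::size_t ny_, nz_;
    std::vector<float> data_;
};

Usage looks like Grid3 test(10000, 10000, 5); test(3, 7, 2) = 1.0f; and the whole structure costs essentially the 2 GB of the flat layout.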