I have a set of files whose lengths are all multiples of the page-size of my operating system (FreeBSD 10). I would like to mmap() these files to consecutive pages of RAM, giving me the ability to treat a collection of files as one large array of data.
Preferably using portable functions, how can I find a sufficiently large region of unmapped address space so I can be sure that a series of mmap() calls to this region is going to be successful?
Follow these steps:
1. mmap an anonymous region whose size is the sum of the file sizes, letting the kernel choose the address. If this fails, you lose.
2. Unmap the region (actually, unmapping may not be necessary, since mmap with a fixed address implicitly unmaps any previous overlapping region).
3. mmap each file at its offset within that region, passing the MAP_FIXED flag.

This should be fully portable to any POSIX system, but some OSes might have quirks that prevent this method. Try it.
You could mmap a large region whose size is the sum of the sizes of all the files, using MAP_PRIVATE | MAP_ANON and protection PROT_NONE, which prevents the OS from unnecessarily committing backing store for the region.
This will reserve but not commit memory.
You could then map filename1 at [baseAddr, baseAddr + size1) and filename2 at [baseAddr + size1, baseAddr + size1 + size2), and so on.
I believe the flags for this are MAP_FIXED | MAP_PRIVATE.