I was recently pondering the following scenario: suppose you have a huge database and you want to perform some calculations while loading parts of it. It may be that even a small part of that database does not fit into Java's heap memory, which is quite limited. How do people solve this problem? How does Google perform analysis on terabytes of data with limited memory?
Thanks in advance for your replies.
The short answer is that you need to process the data in chunks that do fit into memory and then assemble the results of those chunked computations into a final answer (possibly in multiple stages). A common distributed paradigm for this is MapReduce: see Google's original MapReduce paper for details on their implementation, and Hadoop for an open-source implementation.
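To make the "process in chunks, then combine" idea concrete, here is a minimal Java sketch that streams a large file one record at a time and keeps only a running partial result in memory, so the heap footprint stays constant no matter how big the input is. The file name `huge_data.csv` and the one-number-per-line format are assumptions purely for illustration.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ChunkedAverage {
    public static void main(String[] args) throws IOException {
        // Hypothetical input: one numeric value per line, possibly far larger than the heap.
        // Only the current buffered line plus two accumulators live in memory at any time.
        double sum = 0;
        long count = 0;
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("huge_data.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sum += Double.parseDouble(line.trim()); // fold each record into the partial result
                count++;
            }
        }
        System.out.println("average = " + (sum / count));
    }
}
```

MapReduce generalizes the same pattern across many machines: each worker computes a partial result (the "map"/combine step) over its own chunk, and a reducer merges those partials into the final answer.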