 

Solution to NSFetchedResultsController memory handling

Recently I was working on an application that lets the user create and browse a large quantity of data at a time. In this application, it is possible for a collectionView backed by an NSFetchedResultsController to have 2,000+ viewable objects at a single point in time.

When testing this scenario, I soon discovered and verified that NSFetchedResultsController does not optimally page NSManagedObjects in and out of memory when batched indexes are accessed.

As indexes are accessed from the NSFetchedResultsController and more NSManagedObjects are loaded, the NSFetchedResultsController appears to retain the previously loaded NSManagedObjects in memory rather than paging them in and out per accessed index set, as the fetchBatchSize set on the fetchRequest would suggest. To put it concretely: if the fetchBatchSize is set to 45 and index 1,000 is then loaded, the NSFetchedResultsController seems to hold onto the memory for everything prior to index 1,000 that has been loaded via index access, rather than faulting it back out.
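To make the setup under discussion concrete, here is a minimal sketch of a fetched results controller configured with the batch size described above. The entity name "Photo" and sort key "creationDate" are placeholders, not from the original question:

```swift
import CoreData

// Sketch only: "Photo" and "creationDate" are assumed placeholder names.
func makeFetchedResultsController(context: NSManagedObjectContext)
        -> NSFetchedResultsController<NSManagedObject> {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Photo")
    request.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
    // fetchBatchSize bounds how many rows are materialized per trip to the
    // store, but it does not evict batches that were already materialized.
    request.fetchBatchSize = 45
    return NSFetchedResultsController(fetchRequest: request,
                                      managedObjectContext: context,
                                      sectionNameKeyPath: nil,
                                      cacheName: nil)
}
```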

If managedObjectContext.reset() is called on the parent context, the memory is properly released until the objects are loaded again via index access. The fetchBatchSize seems to only prevent all of the data from being loaded at once: when indexes are accessed, the NSFetchedResultsController seems to load the new batch while holding onto the previously loaded batches.
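The reset workaround just described can be sketched as a small helper. The caller would re-run performFetch() and reload the collection view afterwards; note that reset() also discards unsaved changes, so a save beforehand may be needed, and refreshAllObjects() is a gentler alternative that re-faults registered objects without discarding pending changes:

```swift
import CoreData

// Sketch of the workaround: drop everything the context has registered,
// then let the caller call performFetch() again and reload the UI.
// Warning: reset() discards unsaved changes in the context as well.
func releaseAccumulatedObjects(in context: NSManagedObjectContext) {
    context.reset()
}
```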

My current contemplations are leading me to create my own paging, where I'll have multiple fetchedResultsControllers with fetchBatchSizes and fetchLimits... which seems far more complicated than it should be. This would give me a manual way of paging the data in and out of memory, allowing proper memory management.
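The window math such a manual pager would need can be sketched as follows, assuming a page size of 45 (the helper names here are hypothetical). Each collection-view index maps to a (fetchOffset, fetchLimit) pair that a dedicated fetch request or controller would use:

```swift
// Sketch of manual-paging window math, assuming a page size of 45.
// Each index maps to the (fetchOffset, fetchLimit) pair of its page.
struct Page: Equatable {
    let offset: Int
    let limit: Int
}

func page(forIndex index: Int, pageSize: Int = 45) -> Page {
    precondition(index >= 0 && pageSize > 0)
    let pageNumber = index / pageSize
    return Page(offset: pageNumber * pageSize, limit: pageSize)
}
```

A fetch request for a given page would then set `fetchOffset` and `fetchLimit` from the returned pair, and pages far from the visible index could be discarded to bound memory.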

My question is whether there's a better way to approach this. I essentially want to page through a practically unlimited number of objects, similar to Apple's Photos application (whose performance blows my mind), while using Core Data.

Note: There have been plenty of posts on this exact issue noting that people have had to work out manual solutions to improve memory management (Example: link#1, link#2). I'm not looking for answers along the lines of "did you forget this" or "you probably did this wrong". I can verify without doubt that I am using the APIs properly as documented. I'm looking for a solution that manages the memory of NSFetchedResultsController better. Setting fetchBatchSize is a temporary solution that works only until the user has loaded a high number of indexes, at which point a memory warning fires and I reset the managedObjectContext. And before it's noted: yes, I've tested with and without the NSFetchedResultsController cache being set and reset accordingly. If anything, I've found separate issues with the cache itself, and it doesn't help with the problem at hand whatsoever.

TheCodingArt asked Sep 08 '25 10:09


1 Answer

I suspect that you are not using the fetched results controller or Core Data correctly. I have had scenarios with 150,000 records and more powered by a fetched results controller with no memory (or performance) problems.

Some details about the setup that worked without quirks:

  • the context is of course a main-thread context
  • fetchBatchSize is not set
  • (optional) use a uniquely named cache (the last argument in the fetched results controller init)
    • you will have to reset the cache when reloading this controller
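The setup above can be sketched as follows; "Photo" and "creationDate" are placeholder names for your entity and sort key. Note that the cache must be deleted via deleteCache(withName:) before the fetch request or its sort order changes:

```swift
import CoreData

// Sketch of the described setup: main-queue context, no fetchBatchSize,
// a uniquely named cache. Entity and key names are placeholders.
let photoResultsCacheName = "PhotoResultsCache"

func makeCachedController(context: NSManagedObjectContext)
        -> NSFetchedResultsController<NSManagedObject> {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Photo")
    request.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
    // No fetchBatchSize, per the setup above.
    return NSFetchedResultsController(fetchRequest: request,
                                      managedObjectContext: context,
                                      sectionNameKeyPath: nil,
                                      cacheName: photoResultsCacheName)
}

// Call before reloading the controller with a changed request.
func resetResultsCache() {
    NSFetchedResultsController<NSManagedObject>.deleteCache(withName: photoResultsCacheName)
}
```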

Also make sure that you are not storing huge images or other BLOBs directly in Core Data - that will definitely degrade performance and result in memory management problems. If you need to display images, lazy-load them.
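One way to lazy-load, as a sketch: store only a file path on the managed object (an attribute like "imagePath", which is an assumption, not from the original) and fetch the bytes on demand through a small cache that evicts under memory pressure:

```swift
import Foundation

// Sketch of lazy loading, assuming the managed object stores a file path
// rather than image bytes. NSCache evicts automatically under memory pressure.
final class LazyImageLoader {
    private let cache = NSCache<NSString, NSData>()

    func imageData(atPath path: String) -> Data? {
        if let cached = cache.object(forKey: path as NSString) {
            return cached as Data
        }
        guard let data = FileManager.default.contents(atPath: path) else {
            return nil
        }
        cache.setObject(data as NSData, forKey: path as NSString)
        return data
    }
}
```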

Mundi answered Sep 11 '25 07:09