Sklearn-GMM on large datasets

I have a large dataset (I can't fit the entire data in memory). I want to fit a GMM on this dataset.

Can I call GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of the data?

asked by abilng

1 Answer

There is no reason to fit it repeatedly. Just randomly sample as many data points as your machine can process in a reasonable time. If the variation in the data is not very high, the random sample will have approximately the same distribution as the full dataset.

import numpy as np
from sklearn.mixture import GMM  # GaussianMixture in recent sklearn versions

# np.random.choice samples from 1-D arrays, so draw row indices instead
indices = np.random.choice(len(full_dataset), size=10000, replace=False)
randomly_sampled = full_dataset[indices]
# If the data does not fit in memory, sample rows while reading it from disk

gmm = GMM(n_components=2)  # choose n_components to suit your data
gmm.fit(randomly_sampled)
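
If the raw data lives in a large file on disk, one way to sample while reading is to stream it in chunks. A minimal sketch, assuming a hypothetical data.csv and that pandas is available:

import pandas as pd

# Stream the (hypothetical) file in chunks and keep a random 1% of each chunk
chunks = pd.read_csv("data.csv", chunksize=100000)
randomly_sampled = pd.concat(
    chunk.sample(frac=0.01, random_state=0) for chunk in chunks
).to_numpy()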

And then use

gmm.predict(full_dataset)
# Again, you can predict one row or one batch at a time if the data does not fit in memory

on the rest to classify them.
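
For prediction, a minimal batch-wise sketch (assuming full_dataset is a NumPy array and gmm is the fitted model from above; for truly out-of-core data you would read and predict chunk by chunk from disk instead):

import numpy as np

# Predict in chunks rather than on the whole array at once
labels = np.concatenate(
    [gmm.predict(batch) for batch in np.array_split(full_dataset, 100)]
)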

answered by Gioelelm