 

Can we build word2vec model in a distributed way?

Currently I have 1.2 TB of text data for building a gensim word2vec model, and training takes almost 15 to 20 days to complete.

I want to build a model on 5 TB of text data, which at this rate might take a few months. I need to minimise this execution time. Is there any way to use multiple large machines to build the model?

Please suggest any approach that can reduce the execution time.

FYI, all my data is in S3, and I use the smart_open module to stream it.

Uma Maheswara Rao Pinninti asked Nov 19 '25


1 Answer

You can use Apache Spark's MLlib Word2Vec: https://javadoc.io/doc/org.apache.spark/spark-mllib_2.12/latest/org/apache/spark/mllib/feature/Word2Vec.html

Word2Vec creates vector representation of words in a text corpus. The algorithm first constructs a vocabulary from the corpus and then learns vector representation of words in the vocabulary. The vector representation can be used as features in natural language processing and machine learning algorithms.

To make our implementation more scalable, we train each partition separately and merge the model of each partition after each iteration. To make the model more accurate, multiple iterations may be needed.

Source: apache/spark at commit e053c55

Guillaume Massé answered Nov 21 '25


