
Speeding up vectorization in sklearn

First question, sorry if I mess something up.

I'm doing a classification project involving 1600 unique text documents across 90 labels. Many of these documents are research papers, so you can imagine the feature set is quite large - well over a million features.

My problem is that vectorizing is taking forever. I understand it won't be fast given my data, but the time it takes is becoming impractical. I took the advice from the first answer to this question and it doesn't seem to have helped - I imagine the optimizations the answerer suggests are already incorporated into scikit-learn.

Here's my code, using the adjusted stemmed vectorizer functions:

%%timeit

vect = StemmedCountVectorizer(min_df=3, max_df=0.7, max_features=200000, tokenizer=tokenize,
        strip_accents='unicode', analyzer='word', token_pattern=r'\w{1,}',
        ngram_range=(1, 3), stop_words='english')

vect.fit(list(xtrain) + list(xvalid))
xtrain_cv = vect.transform(xtrain)
xvalid_cv = vect.transform(xvalid)

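For reference, StemmedCountVectorizer is based on the usual pattern of subclassing CountVectorizer and stemming inside build_analyzer; roughly this (a simplified sketch, not my exact class):

from sklearn.feature_extraction.text import CountVectorizer
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')

class StemmedCountVectorizer(CountVectorizer):
    def build_analyzer(self):
        # Wrap the default analyzer so every token is stemmed before counting
        analyzer = super().build_analyzer()
        return lambda doc: [stemmer.stem(w) for w in analyzer(doc)]
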
The tokenizer references this function:

import string

import nltk
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')

def stem_tokens(tokens, stemmer):
    stemmed = []
    for item in tokens:
        stemmed.append(stemmer.stem(item))
    return stemmed

def tokenize(text):
    tokens = nltk.word_tokenize(text)
    # drop bare punctuation tokens
    tokens = [i for i in tokens if i not in string.punctuation]
    # keep only tokens made of letters and punctuation characters
    tokens = [i for i in tokens if all(j.isalpha() or j in string.punctuation for j in i)]
    # drop tokens containing a slash
    tokens = [i for i in tokens if '/' not in i]
    stems = stem_tokens(tokens, stemmer)
    return stems

The %%timeit report:

24min 16s ± 28.2 s per loop (mean ± std. dev. of 7 runs, 1 loop each)

Is there anything that's obviously slowing me down? Any inefficiencies would be good to know about. I'm thinking about reducing my n-gram range to (1, 2), as I don't think I'm getting many useful 3-gram features, but besides that I'm not sure what else to try.

asked Oct 19 '25 by Daniel Francis

1 Answer

1600 text documents is not really that big, so this should run much faster. Some advice:

1) To profile your code, use cProfile and pstats. You'll see exactly which steps are slow (see the profiling sketch after this list).

2) n-grams add huge complexity. Bi-grams are usually fine; tri-grams start to become very cumbersome. Use a "smarter" solution, for example gensim's Phraser (sketch below).

3) The in operator is slow on lists (it tests each element in turn) but fast on sets (thanks to the underlying hash table). Strings such as string.punctuation behave like lists here, so convert them to a set (example below).

4) Factor the multiple passes over the tokens in your tokenize function into a single loop if you can (sketch below).

5) If it is still not fast enough, parallelise the work across several threads or processes (sketch below).
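
For point 1, a minimal profiling sketch, reusing the vect, xtrain and xvalid names from your question:

import cProfile
import pstats

# Profile one fit over the training + validation texts
profiler = cProfile.Profile()
profiler.enable()
vect.fit(list(xtrain) + list(xvalid))
profiler.disable()

# Print the 20 most expensive calls by cumulative time
pstats.Stats(profiler).sort_stats('cumulative').print_stats(20)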
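
For point 2, a sketch of gensim's phrase detection; tokenized_docs is an assumed name for your documents already split into token lists:

from gensim.models.phrases import Phrases, Phraser

# Learn which word pairs occur together often enough to be treated as one token
bigram = Phrases(tokenized_docs, min_count=5, threshold=10.0)
bigram_phraser = Phraser(bigram)

# Frequent pairs are merged into single tokens such as 'neural_network'
docs_with_phrases = [bigram_phraser[doc] for doc in tokenized_docs]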
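
For point 3, the set conversion is a one-liner:

import string

PUNCT = set(string.punctuation)  # hash lookups instead of scanning the string

tokens = ['Results', ',', 'were', 'significant', '.']
tokens = [t for t in tokens if t not in PUNCT]
print(tokens)  # ['Results', 'were', 'significant']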
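
For point 4, a sketch of your tokenize function collapsed into a single pass over the tokens (same filters as your version, applied once per token, using the punctuation set from above):

import string

import nltk
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')
PUNCT = set(string.punctuation)

def tokenize(text):
    stems = []
    for token in nltk.word_tokenize(text):
        if token in PUNCT or '/' in token:
            continue  # drop bare punctuation and anything containing a slash
        if not all(ch.isalpha() or ch in PUNCT for ch in token):
            continue  # keep only tokens made of letters and punctuation characters
        stems.append(stemmer.stem(token))
    return stems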
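
For point 5: stemming is CPU-bound, so in CPython separate processes usually help more than threads. A sketch that pre-tokenizes the documents in parallel, reusing tokenize, xtrain and xvalid from your question:

from multiprocessing import Pool

if __name__ == '__main__':
    docs = list(xtrain) + list(xvalid)
    with Pool() as pool:  # one worker process per CPU core by default
        tokenized_docs = pool.map(tokenize, docs)

The pre-tokenized documents can then be handed to CountVectorizer with an identity tokenizer and preprocessor (and lowercase=False), so the expensive NLTK work only runs once.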

answered Oct 22 '25 by Robin

