
Nltk word tokenizer treats ending single quote as a separate word

Tags: python, nltk

Here's a code snippet from the IPython notebook:

from nltk.tokenize import word_tokenize

test = "'v'"
words = word_tokenize(test)
words

And the output is:

["'v", "'"]

As you can see, the ending single quote is treated as a separate word, while the first one is part of "v". I want to have

["'v'"]

or

["'", "v", "'"]

Is there any way to achieve this?

asked by Lingviston

2 Answers

Seems like it's not a bug but the expected output from nltk.word_tokenize().

This is consistent with the Treebank word tokenizer from Robert McIntyre's tokenizer.sed:

$ sed -f tokenizer.sed 
'v'
'v ' 

As @Prateek pointed out, you can try other tokenizers that might suit your needs.
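
For instance, here is a minimal sketch with NLTK's regex-based wordpunct_tokenize, which splits every punctuation run into its own token and therefore already gives the second output asked for in the question (at the cost of also splitting clitics like I've):

from nltk.tokenize import wordpunct_tokenize

# wordpunct_tokenize splits on the pattern \w+|[^\w\s]+,
# so both single quotes become separate tokens.
print(wordpunct_tokenize("'v'"))   # ["'", 'v', "'"]

# The trade-off: contractions are split apart as well.
print(wordpunct_tokenize("I've"))  # ['I', "'", 've']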


The more interesting question is why the starting single quote sticks to the following character.

Couldn't we hack the TreebankWordTokenizer, like what was done in https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py ?

import re

from nltk.tokenize.treebank import TreebankWordTokenizer

# Standard word tokenizer.
_treebank_word_tokenizer = TreebankWordTokenizer()

# See discussion on https://github.com/nltk/nltk/pull/1437
# Adding to TreebankWordTokenizer, the splits on
# - chevron quotes u'\xab' and u'\xbb' .
# - unicode quotes u'\u2018', u'\u2019', u'\u201c' and u'\u201d'

improved_open_quote_regex = re.compile(u'([«“‘„]|[`]+|[\']+)', re.U)
improved_close_quote_regex = re.compile(u'([»”’])', re.U)
improved_punct_regex = re.compile(r'([^\.])(\.)([\]\)}>"\'' u'»”’ ' r']*)\s*$', re.U)
_treebank_word_tokenizer.STARTING_QUOTES.insert(0, (improved_open_quote_regex, r' \1 '))
_treebank_word_tokenizer.ENDING_QUOTES.insert(0, (improved_close_quote_regex, r' \1 '))
_treebank_word_tokenizer.PUNCTUATION.insert(0, (improved_punct_regex, r'\1 \2 \3 '))

_treebank_word_tokenizer.tokenize("'v'")

[out]:

["'", 'v', "'"]

Yes, the modification would work for the string in the OP, but it'll start to break all the clitics, e.g.

>>> print(_treebank_word_tokenizer.tokenize("'v', I've been fooled but I'll seek revenge."))
["'", 'v', "'", ',', 'I', "'", 've', 'been', 'fooled', 'but', 'I', "'", 'll', 'seek', 'revenge', '.']

Note that the original nltk.word_tokenize() keeps the starting single quotes attached to the clitics and outputs this instead:

>>> print(nltk.word_tokenize("'v', I've been fooled but I'll seek revenge."))
["'v", "'", ',', 'I', "'ve", 'been', 'fooled', 'but', 'I', "'ll", 'seek', 'revenge', '.']

There are strategies at https://github.com/nltk/nltk/blob/develop/nltk/tokenize/treebank.py#L268 to handle ending quotes, but not starting quotes that follow clitics.

But the main reason for this "problem" is that the word tokenizer has no sense of balancing quotation marks. If we look at the MosesTokenizer, there are a lot more mechanisms to handle quotes.
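
As a small illustration of that point, the Treebank tokenizer is just an ordered list of regex substitutions (the same STARTING_QUOTES / ENDING_QUOTES lists the hack above inserts into), and nothing in those rules keeps track of whether an opening quote has a matching closing quote:

from nltk.tokenize.treebank import TreebankWordTokenizer

tbwt = TreebankWordTokenizer()

# Each rule is a (compiled regex, replacement) pair applied in order;
# there is no state that pairs an opening quote with a closing one.
for regexp, substitution in tbwt.STARTING_QUOTES + tbwt.ENDING_QUOTES:
    print(regexp.pattern, '->', repr(substitution))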


Interestingly, Stanford CoreNLP doesn't keep the starting single quote attached.

In terminal:

wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-preload tokenize,ssplit,pos,lemma,parse,depparse \
-status_port 9000 -port 9000 -timeout 15000

Python:

>>> from nltk.parse.corenlp import CoreNLPParser
>>> parser = CoreNLPParser()
>>> parser.tokenize("'v'")
<generator object GenericCoreNLPParser.tokenize at 0x1148f9af0>
>>> list(parser.tokenize("'v'"))
["'", 'v', "'"]
>>> list(parser.tokenize("I've"))
['I', "'", 've']
>>> list(parser.tokenize("I've'"))
['I', "'ve", "'"]
>>> list(parser.tokenize("I'lk'"))
['I', "'", 'lk', "'"]
>>> list(parser.tokenize("I'lk"))
['I', "'", 'lk']
>>> list(parser.tokenize("I'll"))
['I', "'", 'll']

Looks like there's some sort of regex hack put in to recognize/correct the English clitics.

If we do some reverse engineering:

>>> list(parser.tokenize("'re"))
["'", 're']
>>> list(parser.tokenize("you're"))
['you', "'", 're']
>>> list(parser.tokenize("you're'"))
['you', "'re", "'"]
>>> list(parser.tokenize("you 're'"))
['you', "'re", "'"]
>>> list(parser.tokenize("you the 're'"))
['you', 'the', "'re", "'"]

It's possible to add a regex to patch word_tokenize, e.g.

>>> import re
>>> pattern = re.compile(r"(?i)(\')(?!ve|ll|t)(\w)\b")
>>> x = "I'll be going home I've the 'v ' isn't want I want to split but I want to catch tokens like 'v and 'w ' ."
>>> pattern.sub(r'\1 \2', x)
"I'll be going home I've the ' v ' isn't want I want to split but I want to catch tokens like ' v and ' w ' ."
>>> x = "I 'll be going home I 've the 'v ' isn't want I want to split but I want to catch tokens like 'v and 'w ' ."
>>> pattern.sub(r'\1 \2', x)

So we can do something like:

import re
from nltk.tokenize import sent_tokenize
from nltk.tokenize.treebank import TreebankWordTokenizer

# Standard word tokenizer.
_treebank_word_tokenizer = TreebankWordTokenizer()

# See discussion on https://github.com/nltk/nltk/pull/1437
# Adding to TreebankWordTokenizer, the splits on
# - chevron quotes u'\xab' and u'\xbb' .
# - unicode quotes u'\u2018', u'\u2019', u'\u201c' and u'\u201d'

improved_open_quote_regex = re.compile(u'([«“‘„]|[`]+)', re.U)
improved_open_single_quote_regex = re.compile(r"(?i)(\')(?!re|ve|ll|m|t|s|d)(\w)\b", re.U)
improved_close_quote_regex = re.compile(u'([»”’])', re.U)
improved_punct_regex = re.compile(r'([^\.])(\.)([\]\)}>"\'' u'»”’ ' r']*)\s*$', re.U)
_treebank_word_tokenizer.STARTING_QUOTES.insert(0, (improved_open_quote_regex, r' \1 '))
_treebank_word_tokenizer.STARTING_QUOTES.append((improved_open_single_quote_regex, r'\1 \2'))
_treebank_word_tokenizer.ENDING_QUOTES.insert(0, (improved_close_quote_regex, r' \1 '))
_treebank_word_tokenizer.PUNCTUATION.insert(0, (improved_punct_regex, r'\1 \2 \3 '))

def word_tokenize(text, language='english', preserve_line=False):
    """
    Return a tokenized copy of *text*,
    using NLTK's recommended word tokenizer
    (currently an improved :class:`.TreebankWordTokenizer`
    along with :class:`.PunktSentenceTokenizer`
    for the specified language).

    :param text: text to split into words
    :type text: str
    :param language: the model name in the Punkt corpus
    :type language: str
    :param preserve_line: An option to preserve the sentence as-is and not sentence-tokenize it.
    :type preserve_line: bool
    """
    sentences = [text] if preserve_line else sent_tokenize(text, language)
    return [token for sent in sentences
            for token in _treebank_word_tokenizer.tokenize(sent)]

[out]:

>>> print(word_tokenize("The 'v', I've been fooled but I'll seek revenge."))
['The', "'", 'v', "'", ',', 'I', "'ve", 'been', 'fooled', 'but', 'I', "'ll", 'seek', 'revenge', '.']
>>> word_tokenize("'v' 're'")
["'", 'v', "'", "'re", "'"]
answered by alvas

Try the MosesTokenizer and MosesDetokenizer from nltk.tokenize.moses:

from nltk.tokenize.moses import MosesTokenizer, MosesDetokenizer
test = "'v'"
t, d = MosesTokenizer(), MosesDetokenizer()
tokens = t.tokenize(test)
tokens
['&apos;v&apos;']

where &apos; is the XML escape for the single quote '.

You can also use the escape=False argument to prevent the escaping of XML special characters:

>>> t.tokenize("'v'", escape=False)
["'v'"]

The output that keeps the 'v' together is consistent with the original Moses tokenizer, i.e.

~/mosesdecoder/scripts/tokenizer$ perl tokenizer.perl -l en < x
Tokenizer Version 1.1
Language: en
Number of threads: 1
&apos;v&apos;
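
As an aside, the snippet above creates a MosesDetokenizer (d) but never uses it; here is a minimal round-trip sketch, assuming the detokenize(tokens, return_str=True) signature of the same nltk.tokenize.moses module:

from nltk.tokenize.moses import MosesTokenizer, MosesDetokenizer

t, d = MosesTokenizer(), MosesDetokenizer()
tokens = t.tokenize("'v'", escape=False)      # ["'v'"]
# return_str=True is assumed here; it asks the detokenizer for a plain
# string rather than a list of tokens.
print(d.detokenize(tokens, return_str=True))  # "'v'"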

There are other tokenizers you can explore as well; several have their own handling of single quotes.
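
For example, a minimal sketch with NLTK's RegexpTokenizer; the pattern below is just an illustration that keeps a fully quoted 'v' together as one token, though it makes no attempt to handle clitics:

from nltk.tokenize import RegexpTokenizer

# Match a quoted word, a plain word, or any other non-space character.
tok = RegexpTokenizer(r"'\w+'|\w+|\S")

print(tok.tokenize("'v'"))   # ["'v'"]
print(tok.tokenize("I've"))  # ['I', "'", 've'] -- clitics are not kept together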

answered by Morse