I have the following code for taking a word from an input text file and printing its synonyms, definitions, and example sentences using WordNet. It separates the synonyms by the part of speech of their synsets, i.e., the synonyms that are verbs and the synonyms that are adjectives are printed separately.
For example, for the word flabbergasted the synonyms are 1) flabbergast, boggle, bowl over, which are verbs, and 2) dumbfounded, dumfounded, flabbergasted, stupefied, thunderstruck, dumbstruck, dumbstricken, which are adjectives.
How do I print the part of speech along with the synonyms? Here is the code I have so far:
import nltk
from nltk.corpus import wordnet as wn

tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
fp = open('sample.txt', 'r')
data = fp.read()
tokens = nltk.wordpunct_tokenize(data)
text = nltk.Text(tokens)
words = [w.lower() for w in text]
for a in words:
    print a
    syns = wn.synsets(a)
    for s in syns:
        print
        print "definition:", s.definition
        print "synonyms:"
        for l in s.lemmas:
            print l.name
        print "examples:"
        for b in s.examples:
            print b
        print
Simply call pos() on a synset. To list all the parts of speech across a word's synsets:
>>> from nltk.corpus import wordnet as wn
>>> syns = wn.synsets('dog')
>>> set([x.pos() for x in syns])
{'n', 'v'}
Unfortunately this doesn't seem to be documented anywhere except the source code, which shows other methods that can be called on a synset.
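Applied to your loop, a minimal sketch of the grouping (Python 3 and NLTK 3's method-style API; `group_by_pos`, `synonyms_by_pos`, and `POS_NAMES` are my own helper names, not part of NLTK):

```python
from collections import defaultdict

# Human-readable names for WordNet's POS tags
POS_NAMES = {'n': 'noun', 'v': 'verb', 'a': 'adjective',
             's': 'adjective satellite', 'r': 'adverb'}

def group_by_pos(pairs):
    """Group (pos_tag, lemma_name) pairs into {pos_tag: [names]},
    keeping first-seen order and dropping duplicate names."""
    groups = defaultdict(list)
    for pos, name in pairs:
        if name not in groups[pos]:
            groups[pos].append(name)
    return dict(groups)

def synonyms_by_pos(word):
    """Tag every lemma of every synset of `word` with the synset's
    pos(), then group the lemma names by that tag."""
    from nltk.corpus import wordnet as wn  # needs the wordnet corpus downloaded
    return group_by_pos((s.pos(), l.name())
                        for s in wn.synsets(word)
                        for l in s.lemmas())

# Usage (assuming the wordnet corpus is installed):
#   for pos, names in synonyms_by_pos('flabbergasted').items():
#       print(POS_NAMES.get(pos, pos) + ': ' + ', '.join(names))
```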
Synset attributes, accessible via methods with the same name:
name: The canonical name of this synset, formed using the first lemma of this synset. Note that this may be different from the name passed to the constructor if that string used a different lemma to identify the synset.
pos: The synset's part of speech, matching one of the module level attributes ADJ, ADJ_SAT, ADV, NOUN or VERB.
lemmas: A list of the Lemma objects for this synset.
definition: The definition for this synset.
examples: A list of example strings for this synset.
offset: The offset in the WordNet dict file of this synset.
lexname: The name of the lexicographer file containing this synset.
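In NLTK 3 those attributes are accessed as methods of the same name (name(), pos(), definition(), and so on). A small sketch of a printer that shows the POS tag next to the synonyms; `describe_synset` is my own helper name, and it only assumes an object exposing those methods:

```python
def describe_synset(syn):
    """Format one synset using the method-style accessors listed above:
    name(), pos(), definition(), lemmas() and examples()."""
    lines = ["%s (%s)" % (syn.name(), syn.pos()),
             "definition: %s" % syn.definition(),
             "synonyms: %s" % ", ".join(l.name() for l in syn.lemmas())]
    for ex in syn.examples():
        lines.append("example: %s" % ex)
    return "\n".join(lines)

# Usage with WordNet (corpus must be downloaded):
#   from nltk.corpus import wordnet as wn
#   for s in wn.synsets('flabbergasted'):
#       print(describe_synset(s))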