I have a large set of short strings. What are some algorithms and indexing strategies for filtering the list down to items that contain a given substring? For example, suppose I have a list:
val words = List(
  "pick",
  "prepick",
  "picks",
  "picking",
  "kingly"
  ...
)
How could I find strings that contain the substring "king"? I could brute force the problem like so:
words.filter(_.contains("king")) // yields List("picking", "kingly")
This is only practical for small sets. Today I need to support 10 million strings, with a future goal in the billions. Obviously I need to build an index, but what kind of index?
I have looked at storing an n-gram index in MySQL, but I am not sure this is the best approach, and it is unclear how to query the index efficiently when the search string is longer than the n-gram size.
I have also considered Lucene, but it is optimized for token matching rather than substring matching, and does not seem to support simple substring queries out of the box. Lucene does have a few classes related to n-grams (org.apache.lucene.analysis.ngram.NGramTokenFilter is one example), but these seem to be intended for spell check and autocomplete use cases, not substring matching, and the documentation is thin.
What other algorithms and indexing strategies should I consider? Are there any open source libraries that support this? Can the SQL or Lucene strategies (above) be made to work?
Another way to illustrate the requirement is with SQL:
SELECT word FROM words WHERE word LIKE CONCAT('%', ?, '%');
Here `?` is a user-provided search string, and the result is a list of words that contain the search string.
How big is the longest word? If it's around 7-8 characters, you could generate every substring of every string and insert those substrings into a trie (the structure used in Aho-Corasick: http://en.wikipedia.org/wiki/Aho-Corasick). Building the tree will take some time, but afterwards finding all occurrences is O(length of the searched word).
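A minimal Scala sketch of this idea (class and method names are illustrative, not from any library). Rather than inserting every substring, it is enough to insert every *suffix*, since any substring is a prefix of some suffix; prefix walks from the root then find all substring matches:

```scala
import scala.collection.mutable

// A simple trie node; wordIds records which words have a suffix
// passing through this node.
final class TrieNode {
  val children = mutable.Map.empty[Char, TrieNode]
  val wordIds  = mutable.Set.empty[Int]
}

final class SubstringTrie(words: Seq[String]) {
  private val root = new TrieNode

  // Insert every suffix of every word. A substring of a word is a
  // prefix of one of its suffixes, so prefix lookups cover all matches.
  for ((word, id) <- words.zipWithIndex; start <- word.indices) {
    var node = root
    for (ch <- word.substring(start)) {
      node = node.children.getOrElseUpdate(ch, new TrieNode)
      node.wordIds += id
    }
  }

  // Walk the trie along the query: O(query.length) node hops.
  def search(query: String): Set[String] = {
    var node = root
    for (ch <- query) {
      node.children.get(ch) match {
        case Some(next) => node = next
        case None       => return Set.empty
      }
    }
    node.wordIds.map(words).toSet
  }
}

val words = List("pick", "prepick", "picks", "picking", "kingly")
val index = new SubstringTrie(words)
index.search("king") // the set {"picking", "kingly"}
```

The catch is memory: a word of length n contributes O(n^2) characters of suffixes, which is why this only pays off when the strings are short.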
Postgres has a module, pg_trgm, which provides trigram indexes.
Building a trigram index seems like an interesting idea too.
Regarding the comment in your question about how to break down searches longer than the n-gram size:
Here's one approach which will work:
Say the search string is "abcde", and we have built a trigram index. (Your strings are short, which could hit a sweet spot for this.) Let the search results be abc = S1, bcd = S2, cde = S3, where S1, S2, S3 are sets of word indexes.
Then the longest common substring (LCS) of S1, S2, S3 gives the indexes we want.
We can transform each set of indexes into a single string, with the indexes separated by a delimiter (say, a space), before computing the LCS.
After finding the LCS, we still have to check the candidates against the complete pattern, since we broke the search term down: that is, we must prune results like "abc-XYZ-bcd-HJI-cde", where all the trigrams appear but not contiguously.
The LCS of a set of strings can be found efficiently with suffix arrays or suffix trees.
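To make the mechanics concrete, here is a small Scala sketch of a trigram index (names are illustrative). It uses a plain intersection of the posting sets rather than the LCS transformation, followed by the verification step described above; queries shorter than three characters would need separate handling in a real index:

```scala
// Split a string into its overlapping trigrams.
def trigrams(s: String): Seq[String] =
  if (s.length < 3) Seq(s) else s.sliding(3).toSeq

val words = Vector("pick", "prepick", "picks", "picking", "kingly")

// Inverted index: each trigram maps to the set of word ids containing it.
val index: Map[String, Set[Int]] =
  words.zipWithIndex
    .flatMap { case (w, id) => trigrams(w).map(_ -> id) }
    .groupMap(_._1)(_._2)
    .view.mapValues(_.toSet).toMap

def search(query: String): Seq[String] = {
  // Intersect the posting sets of the query's trigrams ...
  val candidateIds = trigrams(query)
    .map(g => index.getOrElse(g, Set.empty[Int]))
    .reduce(_ intersect _)
  // ... then verify: containing all the trigrams does not guarantee
  // they appear contiguously, so prune false positives.
  candidateIds.toSeq.map(words).filter(_.contains(query))
}

search("king") // matches "picking" and "kingly"
```

Intersecting from the rarest trigram first keeps the candidate set small, which matters at the scale the question describes.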