
I'm implementing algorithms from http://portal.acm.org/citation.cfm?id=1813708 that use suffix arrays to find longest common substrings. The algorithms involve constructing a suffix array for the concatenation of a set of given strings, joined by separator characters called sentinels. For example, given the strings a, b and c, a new string d = a$1b$2c$3 is created, where $1, $2 and $3 are sentinel characters marking the end of each string. The sentinel characters must be unique and lexicographically less than all other characters in a, b and c.
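
To make this concrete, here is a rough sketch of the construction (the sample strings are made up, and it assumes the inputs contain only printable ASCII, so the low control codes are free to act as sentinels):

# Build d = a$1b$2c$3 using control characters as sentinels.
# Assumes the inputs contain only printable ASCII (code points >= 32),
# so chr(1)..chr(N) are unused and sort below every input character.
strings = ["banana", "bandana", "anabas"]    # a, b, c
parts = []
for i, s in enumerate(strings, start=1):
    parts.append(s)
    parts.append(chr(i))                     # sentinel $i ends string i
d = "".join(parts)                           # 'banana\x01bandana\x02anabas\x03'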

My question revolves around the representation of the sentinel characters in Python. If a, b and c are ASCII strings, I'm thinking I might need to convert those strings to UTF-8 and shift their range from 0-127 to a higher range so that there are characters available that are lexicographically less than any character in the strings. If that seems reasonable, what is the most efficient mechanism in Python for remapping the characters so that their range becomes N to 127+N, where N is the number of strings provided?

Chris

2 Answers


You can do this using Unicode (not UTF-8) strings. In Python 3, all strings are Unicode, but in Python 2 you need the u prefix (i.e. "hello" is not Unicode but u"world" is).

>>> s = u"string one"
>>> N = 3
>>> "".join(unichr(ord(x) + N) for x in s)
u'vwulqj#rqh'

For Python 3, this would be slightly simpler:

>>> s = "string one"
>>> N = 3
>>> "".join(chr(ord(x) + N) for x in s)
'vwulqj#rqh'
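
A minimal sketch of how the shifted strings might then be joined with sentinels, assuming Python 3 (the helper name is made up): once every character is shifted up by N, the code points 0 through N-1 are unused and strictly smaller than anything in the shifted strings, so they can serve as the N unique sentinels.

def concat_with_sentinels(strings):
    n = len(strings)
    # Shift every character up by n so code points 0..n-1 become free.
    shifted = ["".join(chr(ord(c) + n) for c in s) for s in strings]
    # chr(i) terminates the i-th shifted string; each sentinel is unique
    # and lexicographically less than every shifted character.
    return "".join(t + chr(i) for i, t in enumerate(shifted))

>>> concat_with_sentinels(["ab", "ba"])
'cd\x00dc\x01'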
Greg Hewgill

I think you should use a tokenizer and replace each token with an integer. Then there will be plenty of integers left over for the sentinels. It's probably more convenient to use large integers as sentinels rather than small ones. For printout, you can use whatever Unicode character you want, and you may as well use the same character for all of them.
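
A minimal sketch of that idea, assuming a trivial whitespace tokenizer (the names here are made up): each distinct token gets a small integer ID, and each document's sentinel is an integer just above the vocabulary size, so it cannot collide with any token.

def encode(documents):
    vocab = {}                                    # token -> small integer ID
    encoded = []
    for doc in documents:
        ids = [vocab.setdefault(tok, len(vocab))  # assign IDs on first sight
               for tok in doc.split()]            # whitespace tokenizer
        encoded.append(ids)
    # Sentinels: one distinct integer per document, all larger than any token ID.
    sequence = []
    for i, ids in enumerate(encoded):
        sequence.extend(ids)
        sequence.append(len(vocab) + i)
    return sequence, vocab

>>> encode(["the cat sat", "the cat ran"])
([0, 1, 2, 4, 0, 1, 3, 5], {'the': 0, 'cat': 1, 'sat': 2, 'ran': 3})

The resulting integer sequence can then be suffix-sorted directly, since Python compares sequences of integers element by element.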

Are you implementing Yamamoto & Church? If so, have a look at some newer literature before you start. I recommend Abouelhoda et al., Extended Suffix Array, and Kim, Kim & Park, Linearized Suffix Trees. And if you like combinatorics, look at Schürmann, Klaus-Bernd, Suffix Arrays in Theory and Practice.

Also, I recommend 3-way radix quicksort, as opposed to a specialized suffix sorting algorithm. You only need a specialized suffix sorting algorithm if there are long redundancies in your corpus. But those redundancies are unnecessary, and they will screw up your statistics.
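
A minimal sketch of 3-way radix (multikey) quicksort applied to suffix sorting, assuming the corpus has already been encoded as a sequence of non-negative integers as above (the function names are made up):

def suffix_sort(symbols):
    # Return the suffix array of `symbols`, a sequence of non-negative
    # integers (for a plain string s, pass [ord(c) for c in s]).
    sa = list(range(len(symbols)))
    _mkq(symbols, sa, 0, len(sa), 0)
    return sa

def _symbol(symbols, suffix, depth):
    # Symbol `depth` positions into the suffix, or -1 past its end,
    # so a suffix sorts before any of its extensions.
    pos = suffix + depth
    return symbols[pos] if pos < len(symbols) else -1

def _mkq(symbols, sa, lo, hi, depth):
    # Multikey quicksort of sa[lo:hi], comparing suffixes from their
    # depth-th symbol onward.
    while hi - lo > 1:
        pivot = _symbol(symbols, sa[(lo + hi) // 2], depth)
        lt, i, gt = lo, lo, hi
        while i < gt:                          # 3-way partition around pivot
            k = _symbol(symbols, sa[i], depth)
            if k < pivot:
                sa[lt], sa[i] = sa[i], sa[lt]
                lt += 1
                i += 1
            elif k > pivot:
                gt -= 1
                sa[gt], sa[i] = sa[i], sa[gt]
            else:
                i += 1
        _mkq(symbols, sa, lo, lt, depth)       # block with symbol < pivot
        if pivot >= 0:                         # equal block: go one symbol deeper
            _mkq(symbols, sa, lt, gt, depth + 1)
        lo = gt                                # iterate on the > pivot block

>>> suffix_sort([ord(c) for c in "banana"])
[5, 3, 1, 0, 4, 2]

The equal-symbol block recurses one position deeper each time, so very long repeated substrings can exhaust Python's recursion limit; that is exactly the kind of redundancy where a specialized suffix sorting algorithm pays off.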

And if you make something interesting, I would be interested to see it.

Dale Gerdemann
  • Thanks Dale. I am currently reimplementing my unicode version to use integers as you suggest. Unicode introduced some scale limitations that I need to overcome. Appreciate the pointers to references. Some of these I have not yet seen. Thanks again. – Chris Feb 27 '11 at 04:30
  • Couple thoughts: If you have long repeats, then you need a suffix sort algorithm rather than a string sort. But if you do use a string sort, then modify it to report what portion of the text was being sorted just before stack overflow. For natural language texts, long repeats are quotes, cut-paste, plagiarism etc, which may need to be removed to avoid skewing ngram statistics. To find other longest repeats, traverse implicit interval tree, collect maximal with doc_freq > k and put onto priority queue. It's a simple idea, but it's not clear to me (yet) that the cited paper does better. – Dale Gerdemann May 06 '11 at 03:13