Faster Algorithms for Longest Common Substring
In the classic longest common substring (LCS) problem, we are given two strings S and T, each of length at most n, over an alphabet of size σ, and we are asked to find a longest string occurring as a fragment of both S and T. Weiner, in his seminal paper that introduced the suffix tree, presented an 𝒪(n log σ)-time algorithm for this problem [SWAT 1973]. For polynomially-bounded integer alphabets, the linear-time construction of suffix trees by Farach yielded an 𝒪(n)-time algorithm for the LCS problem [FOCS 1997]. However, for small alphabets, this is not necessarily optimal in the word RAM model of computation, in which the strings can be stored in 𝒪(n log σ / log n) space and read in 𝒪(n log σ / log n) time. We show that, in this model, an LCS can be computed in 𝒪(n log σ / √(log n)) time, which is sublinear in n if σ = 2^{o(√(log n))} (in particular, if σ = 𝒪(1)), using the optimal 𝒪(n log σ / log n) space.

We then lift our ideas to the problem of computing a k-mismatch LCS, which has received considerable attention in recent years. In this problem, the aim is to compute a longest substring of S that occurs in T with at most k mismatches. Thankachan et al. showed how to compute a k-mismatch LCS in 𝒪(n log^k n) time for k = 𝒪(1) [J. Comput. Biol. 2016]. We show an 𝒪(n log^{k-1/2} n)-time algorithm, for any constant k > 0 and irrespective of the alphabet size, using 𝒪(n) space, as in the previous approaches. We thus notably break through the well-known 𝒪(n log^k n) barrier, which stems from the recursive heavy-path decomposition technique first introduced in the seminal paper of Cole et al. [STOC 2004] on string indexing with k errors.
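The paper's sublinear-time algorithm relies on word-RAM bit packing and suffix-tree machinery and is not reproduced here. Purely as a reference point for the problem definition, the following is a minimal quadratic-time dynamic-programming baseline for the classic LCS problem; the function name and the row-streaming layout are illustrative choices, not from the paper.

```python
def longest_common_substring(s: str, t: str) -> str:
    """Naive O(|s|*|t|)-time, O(|t|)-space DP baseline.

    prev[j] holds the length of the longest common suffix of the
    prefixes s[:i] and t[:j]; the answer is the maximum over all (i, j).
    """
    best_len, best_end = 0, 0
    prev = [0] * (len(t) + 1)
    for i, a in enumerate(s):
        curr = [0] * (len(t) + 1)
        for j, b in enumerate(t):
            if a == b:
                curr[j + 1] = prev[j] + 1
                if curr[j + 1] > best_len:
                    best_len, best_end = curr[j + 1], i + 1
        prev = curr
    return s[best_end - best_len:best_end]


assert longest_common_substring("abcxyz", "xyzabc") in ("abc", "xyz")
```

This baseline runs in Θ(n²) time, far from the 𝒪(n log σ / √(log n)) bound above, but it pins down exactly what is being computed.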
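Similarly, as a naive baseline for the k-mismatch variant (again an illustrative sketch, not the paper's 𝒪(n log^{k-1/2} n) method): for each diagonal of the alignment grid between S and T, a sliding window can maintain at most k mismatching positions, giving 𝒪(n²) time overall.

```python
def k_mismatch_lcs_naive(s: str, t: str, k: int) -> int:
    """Naive O(|s|*|t|)-time baseline for the k-mismatch LCS length.

    Diagonal d pairs s[j + d] with t[j]; along each diagonal, a sliding
    window [lo, hi] keeps at most k mismatching positions, and the widest
    such window over all diagonals is the answer.
    """
    best = 0
    for d in range(-(len(t) - 1), len(s)):       # all alignment diagonals
        js = max(0, -d)                          # first valid index in t
        je = min(len(t), len(s) - d)             # one past the last valid index
        lo, mismatches = js, 0
        for hi in range(js, je):
            if s[hi + d] != t[hi]:
                mismatches += 1
            while mismatches > k:                # shrink until <= k mismatches
                if s[lo + d] != t[lo]:
                    mismatches -= 1
                lo += 1
            best = max(best, hi - lo + 1)
    return best


assert k_mismatch_lcs_naive("abcd", "axcd", 1) == 4
```

The sketch returns only the length; recovering the substring itself is straightforward by additionally recording the window endpoints at which the maximum is attained.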