A new algorithm for “the LCS problem” with application in compressing genome resequencing data

Background The longest common subsequence (LCS) problem is a classical problem in computer science, and forms the basis of the current best-performing reference-based compression schemes for genome resequencing data. Methods First, we present a new algorithm for the LCS problem. Using the generalized suffix tree, we identify the common substrings shared between the two input sequences. Using the maximal common substrings, we construct a directed acyclic graph (DAG), based on which we determine the LCS as the longest path in the DAG. Then, we introduce an LCS-motivated reference-based compression scheme using the components of the LCS, rather than the LCS itself. Results Our basic scheme compressed the Homo sapiens genome (with an original size of 3,080,436,051 bytes) to 15,460,478 bytes. An improvement on the basic method further reduced this to 8,556,708 bytes, or an overall compression ratio of 360. This can be compared to the previous state-of-the-art compression ratios of 157 (Wang and Zhang, 2011) and 171 (Pinho, Pratas, and Garcia, 2011). Conclusion We propose a new algorithm to address the longest common subsequence problem. Motivated by our LCS algorithm, we introduce a new reference-based compression scheme for genome resequencing data. Comparative results against state-of-the-art reference-based compression algorithms demonstrate the performance of the proposed method.

Using LCS to denote the dynamic programming (DP) table, the basic formulation is as follows, given 0 ≤ i ≤ n and 0 ≤ j ≤ m: LCS(i, j) = 0 if i = 0 or j = 0; LCS(i, j) = LCS(i − 1, j − 1) + 1 if S 1 [i] = S 2 [j]; and LCS(i, j) = max{LCS(i − 1, j), LCS(i, j − 1)} otherwise. The above computes the length of the LCS in the last position of the table (LCS(n, m)). As with the edit distance computation, the actual string forming the LCS can be obtained by a traceback on the DP table. This requires O(nm) time and O(nm) space. The LCS matrix has some interesting properties: the entries in any row or in any column are monotonically increasing, and between any two consecutive entries in any row or column, the difference is either 0 or 1. An example LCS matrix and trace are shown in Fig. 1.
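The recurrence above can be sketched directly; the following is a minimal Python illustration of the O(nm)-time, O(nm)-space DP with traceback (our own sketch, not the paper's implementation):

```python
def lcs(s1, s2):
    """Classic dynamic-programming LCS: O(n*m) time and space."""
    n, m = len(s1), len(s2)
    # dp[i][j] = length of LCS of s1[:i] and s2[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Traceback from dp[n][m] to recover one (of possibly many) LCS strings.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if s1[i - 1] == s2[j - 1]:
            out.append(s1[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

Note that the traceback returns only one LCS; in general many distinct subsequences attain the maximum length.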
Alternatively, we can formulate the problem on a two-dimensional grid, where the goal is to find the minimal cost (or maximal cost, depending on the formulation) path from the start position on the grid (typically, (0, 0)) to the end position (n, m). Myers [11] and Ukkonen [12] used this idea to propose a minimum cost path determination problem on the grid, where the path takes a diagonal line from (i − 1, j − 1) to (i, j) if S 1 [i] = S 2 [j] with cost 0, and takes a horizontal or vertical line with a cost of 1, corresponding respectively to insert or delete operations. Hunt and Szymanski [13] earlier used an essentially similar approach to solve the LCS problem in O((r + n) log n) time, with m ≤ n, where r is the number of pairwise symbol matches (S 1 [i] = S 2 [j]). When two non-similar files are compared, we will have r ≪ nm, or r in O(n), leading to a practical O(n log n) time algorithm. However, for very similar files, we have r ≈ nm, giving an O(nm log n) algorithm. This worst case occurs, for instance, when S 1 = a^n and S 2 = a^m.
Hirschberg [14] proposed space-efficient approaches to compute the LCS using DP in O(nm) time and O(n + m) space. The authors of [15] used the observation on monotonically increasing values in the LCS table to identify the "corner points", where the values on the diagonals change from one row to the next. The corners define a sparser 2D grid, based on which they determine the LCS.
A generalization of the LCS problem is to find the LCS for a set of two or more sequences. This is the multiple longest common subsequence problem, which is known to be NP-hard for an arbitrary number of sequences [16]. Another interesting view of the LCS problem is in terms of the longest increasing subsequence (LIS) problem, suggested earlier in [17][18][19], and described in detail in [2]. The LIS approach also solves the LCS problem in O(r log n) time (where m ≤ n). In most practical scenarios, r < nm.
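The LIS view admits a compact sketch: list, for each symbol of S 1 , its matching positions in S 2 in decreasing order, and take a longest strictly increasing subsequence of the concatenation. The following is our reading of the Hunt-Szymanski idea, not code from the paper:

```python
from bisect import bisect_left
from collections import defaultdict

def lcs_length_via_lis(s1, s2):
    """LCS length via longest increasing subsequence, in O(r log n) time,
    where r is the number of pairwise symbol matches."""
    pos = defaultdict(list)
    for j, ch in enumerate(s2):
        pos[ch].append(j)
    # Emit matches per symbol of s1 in DECREASING s2-position order, so that
    # two matches in the same row of the grid cannot both be selected.
    seq = []
    for ch in s1:
        seq.extend(reversed(pos[ch]))
    # Patience-style strictly increasing LIS on seq.
    tails = []
    for x in seq:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)
```

For similar inputs with r ≈ nm, this degrades just as the discussion above describes.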
The LCS has been used in some recent algorithms to compress genome resequencing data [20,21]. Compression of biological sequences is an important and difficult problem, which has been studied for decades by various authors [22][23][24]. See [9,25,26] for recent surveys. Most of the earlier studies focused on lossless compression because it was believed that biological sequences should not admit any data loss, since that would impact later use of the compressed data. The earlier methods also generally exploited self-contained redundancies, without using a reference sequence. The advent of high-throughput next-generation sequencing, with massive datasets that are easily generated in a single experiment, has challenged both compression paradigms.
Lossy compression of high-throughput sequences admitting limited errors has been proposed in [27,28] for significant compression. With the compilation of several reference genomes for different species, more recent methods have considered lossless compression of resequencing data by exploiting the significant redundancies between the genomes of related species. This observation is the basis of various recently proposed methods for reference-based lossless compression [20,21], whereby some available standard reference genome is used as the dictionary. Compression ratios on the order of 80 to 18,000 without loss have been reported [20,21]. The LCS is the hallmark of these reference-based approaches. In this work, we first introduce a new algorithm for the LCS problem, using suffix trees and shortest-path graph algorithms. Motivated by our LCS algorithm, we introduce an improved reference-based compression scheme for resequencing data using the longest previous factor (LPF) data structure [29][30][31].

Preliminaries
The longest common subsequence is defined below in terms of length-1 common substrings.

LCS algorithm
Below, we compute the LCS between S 1 and S 2 in the following way. (i) We use the GST to compute the common substrings (CSSs) shared between S 1 and S 2 . (ii) We use the CSSs to construct a directed acyclic graph (DAG) of maximal CSSs. (iii) We compute LCS by finding the longest path in the DAG. Steps (i) and (iii) are standard tasks. For step (ii), we develop new algorithms and data structures.

Computing the CSSs
We now briefly describe finding the common substrings (CSSs) between S 1 and S 2 . In our LCS algorithm, for simplicity of discussion, we will only use CSSs of length-1.
Let A = ∅. Compute the GST on S 1 $ 1 • S 2 $ 2 , for terminals {$ 1 , $ 2 }. Consider a preorder traversal of the GST. When at depth-1 for a node N, let S = ∅. During the preorder traversal from N, we collect in S all of the suffix index leaves descending from N, which represent the suffixes that share the same first symbol. Let S 1 = S 2 = ∅. For s ∈ S, if s ≤ |S 1 |, then store s in S 1 . Otherwise, store s in S 2 . We represent all of our length-1 matches in the following structure: MATCH {id, p1, p2}. The id is a unique number for the MATCH, and p 1 and p 2 are respectively the positions in S 1 and S 2 where the CSS exists. Let id = 2. Now, for each s 1 ∈ S 1 , we create a new MATCH m = (id++, s 1 , s 2 ) for each s 2 ∈ S 2 . Store each m in A.
The running time is clearly the maximum of the GST construction and the number of length-1 CSSs. Lemma 2. Say n=|S 1 | and m=|S 2 |, then computing the η CSSs of length-1 between S 1 and S 2 requires O(max{n + m, η}) time.
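For illustration, the length-1 CSSs can also be enumerated without a GST by bucketing positions per symbol; this toy sketch mirrors the MATCH structure above (the names are ours, not the paper's):

```python
from collections import defaultdict, namedtuple

# Mirrors the MATCH {id, p1, p2} structure from the text.
Match = namedtuple("Match", ["id", "p1", "p2"])

def length1_matches(s1, s2):
    """Enumerate all length-1 common substrings: pairs of 1-based positions
    (p1, p2) with s1[p1] == s2[p2]. A hash map of per-symbol position lists
    stands in for the generalized suffix tree. Ids start at 2, reserving
    1 and |A|+2 for the sentinel MATCHes S and E."""
    pos2 = defaultdict(list)
    for j, ch in enumerate(s2, start=1):
        pos2[ch].append(j)
    matches, next_id = [], 2
    for i, ch in enumerate(s1, start=1):
        for j in pos2[ch]:
            matches.append(Match(next_id, i, j))
            next_id += 1
    return matches
```

The output size is exactly η, the number of pairwise symbol matches, consistent with Lemma 2.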

DAG construction
Given all of the MATCHes found in A, our task now is to construct the DAG for A. For all paths of the DAG to start and end at a common node, we make MATCHes S and E to respectively precede and succeed the MATCHes in A. (Let S have id = 1 and E have id = |A| + 2, and then store S and E in A.) The goal of the DAG is to represent all maximal CSSs between S 1 and S 2 as paths from S to E. We will later find the LCS, the longest such path.
In the DAG, the nodes will be the MATCH ids, and an edge between MATCHes, say m 1 and m 2 , represents that m 1 is in series with m 2 , i.e. m 1 .p1 < m 2 .p1 and m 1 .p2 < m 2 .p2, so that both can appear in the same common subsequence. Our DAG construction, displayed in Algorithm 1, operates in the following way. We initialize the DAG dag by first declaring dag.gr of size |A|, since gr will represent all of the nodes. All outgoing edges for a node N ∈ A are represented by dag.gr[ N.id] , with dag.sz[ N.id] counting them. By setting dag.sz = {0, . . . , 0}, we clear the edges in our dag. Now, setting these edges is the main task of our algorithm.
We can easily construct the edges by assuming that there exists a data structure PREV pv that can tell us the set of parents for each node a ∈ A. That is, we can call getPrnts(pv, a) to get the set of nodes P that directly precede MATCH a ∈ A in the final dag. By "directly precede", we mean that in the final dag, there is a connection from each p ∈ P to a, i.e. each p is in series with a, meaning that both p AND a can be chosen in the same maximal common subsequence. Further, no p, p2 ∈ P can be in series with one another; rather, they are in parallel with one another, meaning that either p OR p2 (but not both) is chosen in a maximal common subsequence.
With P, we can build an edge from a2 ∈ P to a by first allocating a new space in dag.gr[ a2.id] by incrementing dag.sz[ a2.id] , and then making a directed edge from parent to child, i.e. setting the newly allocated entry of dag.gr[ a2.id] to a.id.
After computing the incoming edges for each node a ∈ A, the dag construction is complete.
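To illustrate the series relation that drives the construction, the following naive sketch (our own, not the paper's Algorithm 1) connects every pair of MATCHes in series; unlike the PREV-based construction it also keeps transitive edges, which does not change the longest S-to-E path:

```python
from collections import namedtuple

Match = namedtuple("Match", ["id", "p1", "p2"])

def build_dag(matches):
    """Naive O(eta^2) DAG: an edge m1 -> m2 whenever m1 is in series with m2
    (m1.p1 < m2.p1 and m1.p2 < m2.p2). Sentinels "S" and "E" bracket all
    MATCHes so every maximal path shares a single source and sink."""
    edges = {"S": [], "E": []}
    for m in matches:
        edges[m.id] = []
    for m1 in matches:
        edges["S"].append(m1.id)   # S precedes every MATCH
        edges[m1.id].append("E")   # every MATCH precedes E
        for m2 in matches:
            if m1.p1 < m2.p1 and m1.p2 < m2.p2:
                edges[m1.id].append(m2.id)
    return edges
```

The point of the PREV structure in the text is precisely to avoid this quadratic all-pairs comparison by retrieving only the direct parents of each node.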

PREV data structure
The simplicity of the DAG construction is due to the PREV pv, detailed here. The pv is composed of four attributes.
HashMap<int,int> p1. Suppose that all a.p1 values (for a ∈ A) are placed on an integer number line. It is very unlikely that all a.p1 values will be consecutive and so, there will be unused numbers (gaps) between adjacent values. Since we later declare matrices on the MATCH p1 (and p2) values, these gaps will be wasteful. With a scan of the a.p1 values (say using a Set), we can rename them consecutively without gaps; these renamed values are found by accessing HashMap<int,int> p1 with the original a.p1 value.
HashMap<int,int> p2. This is the same as the aforementioned p1, but with respect to the a.p2 values.
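The renaming of p1 (and p2) values is ordinary coordinate compression; a one-function sketch:

```python
def rename_consecutive(values):
    """Rename integer positions to consecutive ranks (gap removal), as the
    HashMap<int,int> p1/p2 attributes do, so that matrices indexed by the
    renamed coordinates waste no rows or columns on unused values."""
    return {v: rank for rank, v in enumerate(sorted(set(values)))}
```

Looking up an original a.p1 value in the returned map yields its gap-free index.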

MATCH tbl1[][].
A fundamental data structure to support the getPrnts function is the tbl1: a matrix over the renamed p1 and p2 values in which each entry stores the MATCH "closest" to that location, where "closest" means that the a.p1 value is maximized before the a.p2. The tbl1 is filled by a left-to-right scan of each row, comparing the adjacent elements. The tbl2 is the same as tbl1 except that we define "closest" to mean that the a.p2 value is maximized before the a.p1.

getPrnts function
Given the PREV pv data structure on all MATCHes A, we call getPrnts(pv, L) in line 11 of constructDAG to retrieve the set of parent MATCHes P of the MATCH L ∈ A. Recall that these parents P of the MATCH L are all MATCHes that directly precede L in the DAG, i.e. each p ∈ P is in series with L and no p, p2 ∈ P are in series with one another. Using pv, we can compute, for any MATCH c ∈ A, two direct parents that are closest to c with respect to the p1 and p2 values.
The direct parent of c w.r.t. p1 maximizes the p1 value among MATCHes preceding c, and the direct parent w.r.t. p2 symmetrically maximizes the p2 value (Definition 6). The first getDPrnt in Algorithm 2 implements Definition 6 to return the direct parents for any MATCH, say L ∈ A. A second variant handles the cases where we want to find the direct parent for a MATCH at a certain location in the pv.tbl1 or pv.tbl2.
The direct parents computation (getDPrnt) is the cornerstone of the getPrnts function. The following lemma, implemented in Algorithm 3, proves that the direct parents of c can be used to determine all parents of c. Lemma 7. Given A, the MATCHes between S 1 and S 2 , and a MATCH c ∈ A, the two direct parents of c can be used to compute the set P with all parents of c.
Proof. Let d1 and d2 be the direct parents of c (Definition 6). By Definition 3, d1 is a direct parent because it directly precedes c with the maximum p1 and the rightmost p2 value. Similarly by Definition 4, d2 is a direct parent of c because it directly precedes c with the maximum p2 and the rightmost p1 value. To find the remaining parents of c, we now find other MATCHes that precede c, which are also parallel with d1 and d2. There are three cases.
Case (a). When d1 = null, then also d2 = null since there cannot be another MATCH preceding c. Thus, P = ∅.
Having computed all the possible parents in P, additional processing on P is needed to ensure that no pair of MATCHes in P is in series; if any are in series, delete the MATCH furthest from c. With the pv and getDPrnt, this task is simple. We simply check the direct parents (say dd1 and dd2) for each y ∈ P, and remove dd1 if dd1 ∈ P and remove dd2 if dd2 ∈ P.

Computing the LCS
Since our dag has a single source S (and all paths end at E), the longest path between S and E, i.e. the LCS, is computed by giving all edges a weight of −1 and finding the shortest path from S to E via a topological sort [32].
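A longest path in a DAG is a DP over a topological order; the sketch below is equivalent to the text's trick of weighting every edge −1 and running a shortest-path computation (names and the adjacency-dict input are ours):

```python
def longest_path(edges, src, dst):
    """Length (edge count) of the longest src->dst path in a DAG, via DP over
    a topological order computed with Kahn's algorithm."""
    indeg = {u: 0 for u in edges}
    for u in edges:
        for v in edges[u]:
            indeg[v] += 1
    order, queue = [], [u for u in edges if indeg[u] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v in edges[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # dist[u] = longest path length from src to u, -inf if unreachable.
    dist = {u: float("-inf") for u in edges}
    dist[src] = 0
    for u in order:
        if dist[u] > float("-inf"):
            for v in edges[u]:
                dist[v] = max(dist[v], dist[u] + 1)
    return dist[dst]
```

On the MATCH DAG, a longest S-to-E path with L edges visits L − 1 MATCH nodes, so the LCS length is the returned value minus one.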

Complexity analysis
Our LCS algorithm: (i) finds the length-1 CSSs, (ii) computes the DAG on the CSSs, and (iii) reports the longest DAG path. Here, we analyze the overall time complexity.

Step (i)
First, we find (and store in A) the η length-1 CSSs in O(max{n + m, η}) time by Lemma 2.

Step (ii)
We then construct the DAG dag on these a ∈ A with constructDAG. In constructDAG, we initially compute the newly proposed PREV pv data structure in O(nm) time by Theorem 5. After constructing pv, constructDAG iterates through each a ∈ A and creates an incoming edge between the parents of a and a. So, constructDAG executes in time O(max{nm, η × t getPrnts }), where t getPrnts is the time of getPrnts. The getPrnts running time is in O((i2 − i1) + (j2 − j1)), with respect to the local variables i1, i2, j1, and j2. However, it may be the case that i1 = j1 = 1, i2 = n, and j2 = m, and so O(n + m) time is required by getPrnts. Below we formalize the worst-case result and the case for average strings from a uniform distribution.

Step (iii)
We find the LCS with a topological sort in time linear to the dag size [32], which cannot require more time than that needed to build the dag (see Theorem 10).

Summary
Overall, (i) and (iii) do not add to the complexity of (ii). Given the above, the overall running time is as follows.
Theorem 11. The LCS between the n-length S 1 and the m-length S 2 can be computed in O(max{nm, η × max{n, m}}) time in the worst case and O(max{nm, η × |Σ|}) time on average, where Σ denotes the alphabet.

Compressing resequencing data
When data is released, modified, and re-released over a period of time, a large amount of commonality exists between these releases. Rather than maintaining all uncompressed versions of the data, it is possible to keep one uncompressed version, say D, and compress all future versions D i with respect to D. We refer to D i as the target and D as the reference. This idea is used to compress resequencing data in [20,21], primarily using the LCS. The LCS, however, has two core problems with respect to compression. First, for very similar sequences, the LCS computation time is almost quadratic, or worse, potentially leading to long compression times. Second, the LCS may not always lead to the best compression, especially when some CSS components are very short.
Rather than focusing on the LCS, we consider the maximal CSSs that make up the common subsequences. To intelligently choose which of the CSSs are likely to lead to improved compression, we use the longest previous factor (LPF), an important data structure in text compression [33]. Consider compressing the target T with respect to the reference R; let Z = R • T. Suppose we choose, for each position i of T in Z, the maximal-length substring of some length k starting at i that also occurs at an earlier position h < i in Z. These ks are computed in the LPF data structure on Z at LPF[i] = k, and the position of this CSS is at POS[i] = h [29]. (Note that LPF and POS are constructed in linear time [29][30][31].) The requirement that h < i suits dictionary compression and compressing resequencing data because the CSS beginning at i is compressed by referencing the same CSS at h, occurring earlier in target T or anywhere in the reference R. Our idea is to use the LPF and POS to represent or encode CSSs that make up the target T with tuples. We will then compress these tuples with standard compression schemes.
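The LPF and POS arrays can be illustrated with a naive quadratic computation (the real construction is linear-time [29-31]; 0-based positions and the function name are our choices):

```python
def lpf_pos(z):
    """Naive O(|Z|^2) LPF/POS: LPF[i] is the length of the longest factor
    starting at i that also occurs at some earlier position h < i, and
    POS[i] = h (-1 when no earlier occurrence exists)."""
    n = len(z)
    LPF, POS = [0] * n, [-1] * n
    for i in range(n):
        for h in range(i):
            k = 0
            while i + k < n and z[h + k] == z[i + k]:
                k += 1
            if k > LPF[i]:
                LPF[i], POS[i] = k, h
    return LPF, POS
```

For z = "abab", the factor "ab" starting at position 2 previously occurs at position 0, so LPF[2] = 2 and POS[2] = 0.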

Our compression scheme
We now propose a reference-based compression scheme which scans the LPF and POS on Z in a left-to-right fashion to compress T with respect to R. This scheme is similar to the LZ factorization [29], but differs in how we will encode the CSSs. Our contribution here is (1) using two files to compress T, (2) only encoding CSSs with length at least k, and (3) further compressing the resulting files with standard compression schemes.
Initially, the two output files, triples and symbols, are empty. Let i = |R| + 1. Scanning left to right, if LPF[i] ≥ k, we append the triple (i − |R|, POS[i], LPF[i]) to triples and advance i by LPF[i]; otherwise, we append the single symbol Z[i] to symbols and advance i by 1. The resulting files triples and symbols are binary sequences that can be further compressed with standard compression schemes (so, decompression will start by first reversing this process). The purpose of the k and the two files (one with byte symbols and one with long triples) is to introduce flexibility into the system and encode CSSs with triples (12 bytes) only when beneficial and otherwise, encode a symbol with a byte. For convenience, our implementation encodes each symbol with a byte, but we acknowledge that it is possible to work at the bit-level for small alphabets.
The decompression is also a left-to-right scan. Let i = 1 and point to the beginning of triples and symbols.
( †) Consider the current long word w 1 in triples. According to the triple encoding, this will be the position of the CSS in T. If i = w 1 , then we pick up the next two long words w 2 and w 3 in triples. We now know that a length-w 3 CSS begins at position i of T, copied from the earlier occurrence at position w 2 in Z; we next will consider i = i + w 3 . Else i ≠ w 1 , so we pick up the next char c in symbols since T[ i] = c; we next consider i++. If i ≤ |T|, go to ( †).
The compression and decompression algorithms are detailed in Algorithms 4 and 5, respectively.
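Since Algorithms 4 and 5 are not reproduced here, the following Python sketch captures our reading of the scheme (0-based positions, an inlined naive quadratic LPF for brevity, and illustrative names; not the paper's code):

```python
def compress(R, T, k):
    """Scan LPF/POS on Z = R + T left to right: CSSs of length >= k become
    triples (position in T, earlier position in Z, length); shorter runs are
    emitted verbatim into symbols."""
    Z = R + T
    n = len(Z)
    LPF, POS = [0] * n, [-1] * n       # naive quadratic LPF/POS
    for i in range(n):
        for h in range(i):
            l = 0
            while i + l < n and Z[h + l] == Z[i + l]:
                l += 1
            if l > LPF[i]:
                LPF[i], POS[i] = l, h
    triples, symbols = [], []
    i = len(R)
    while i < n:
        if LPF[i] >= k:
            triples.append((i - len(R), POS[i], LPF[i]))
            i += LPF[i]
        else:
            symbols.append(Z[i])
            i += 1
    return triples, symbols

def decompress(R, triples, symbols, tlen):
    """Inverse left-to-right scan: rebuild T over a growing copy of Z."""
    Z = list(R)
    ti, si, i = 0, 0, 0                # cursors into triples, symbols, T
    while i < tlen:
        if ti < len(triples) and triples[ti][0] == i:
            _, src, length = triples[ti]
            for l in range(length):    # char-by-char copy handles overlaps
                Z.append(Z[src + l])
            i += length
            ti += 1
        else:
            Z.append(symbols[si])
            si += 1
            i += 1
    return "".join(Z[len(R):])
```

A round trip (compress then decompress) recovers T exactly for any k, which is the lossless property the scheme requires.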

Results and discussion
We implemented the previously described compression scheme, selected and fixed parameter k, and ran our program to compress various DNA corpora. In this section, we describe the selection of k and our final results.

Choosing parameter k
Recall that the parameter k is a type of threshold used by our compression scheme to determine whether it is more beneficial to encode a symbol verbatim (that is, 1 byte) or encode a CSS as a triple (that is, 12 bytes). Specifically, our compression algorithm works on the LPF of Z = R • T. Obviously, it is better to encode a length-(l = 1) CSS with a 1-byte symbol, rather than a 12-byte triple. It is clearly the case that for any CSS length 1 ≤ l < 12, it is better to encode the first symbol with 1 byte and take a chance that the next CSS to the right will be significantly larger. Why can we afford to take this chance? One LPF property, which also allows for an efficient construction of the data structure (see [29]), is that consecutive LPF values decrease by at most one, i.e. LPF[i + 1] ≥ LPF[i] − 1, so passing up a CSS costs at most one symbol of its length at the next position. Clearly, we want to encode most CSSs as triples to take advantage of the concise triple representation. Now, the question becomes: how large should we set k, such that we can afford to take a risk passing up length-(l < k) CSSs in hopes of finding even larger CSSs better suited as triples?
For this paper, we decided to select k by studying the impact of the parameter on our compressed results for the Arabidopsis thaliana genome, using target TAIR9 and reference TAIR8. The compression results for various k are shown in Fig. 2; since chromosome 4 does not compress as well as the others, we show it separately in Fig. 3 for improved visualization. For very small k < 12, we have a result that basically encodes with triples only; when k = 1, we are exclusively encoding CSSs as triples. We see that when k is roughly between 12 and 35, we are encouraging the algorithm to pass up encoding smaller CSSs as triples, which leads to the best compression result. The results stay competitive until say k ≥ 100, where the algorithm becomes too optimistic and passes up the opportunity to encode smaller CSSs as triples in hopes that larger CSSs will exist. Further, we see from Fig. 4 that as k becomes large, it indeed becomes very expensive to pass up encoding these CSSs as triples. Also, we see from Fig. 5 that beyond say k = 20, there is minimal compression savings. Thus, we want to balance the expensive symbols files with the space-savings from the triples files.
In Table 1, we show the best compression results and k for the Arabidopsis thaliana genome. Unless otherwise specified, our experiments below fix parameter k as 31, since it is the optimal k common to four of the five Arabidopsis thaliana chromosomes and gives a competitive result for the remaining chromosome. This result follows intuition because k should be at least 11 and not too large, so that we consider only CSSs that are worth encoding as triples.

Compression results
Like [20,21], we compress the Arabidopsis thaliana genome chromosomes in TAIR9 (target) with respect to TAIR8 (reference). In Table 2, we display the compression results. We see that all of our results are competitive with the GRS and GReEn systems, except for chromosome 4, which has the smallest average CSS length of about 326K. Nonetheless, we are able to further compress our results using compression schemes from 7-zip, with L and P respectively representing lzma2 and ppmd, to achieve even better compression.
In Table 3, we show results for compressing the genome Oryza sativa using the target TIGR6.0 and reference TIGR5.0. After compressing our algorithm's output with lzma2 or ppmd, our results are better than both GRS   [20] and GReEn [21]. Note that for each of the chromosomes 6, 9, and 12, our algorithm's output is 12 bytes, better than that of GRS [20] (14 bytes) and GReEn [21] (482 bytes, 366 bytes, and 429 bytes respectively). When we compress our result with lzma2 or ppmd, the result is bloated since more bytes are needed. So, we can further improve the overall result by not compressing chromosomes 6, 9, and 12, and further, selecting the best such compression scheme for each individual chromosome. We acknowledge that additional bits would need to be encoded to determine which compression scheme was selected.
In Table 4, we show the compression results for the Homo sapiens genome, using KOREF_20090224 as the target and KOREF_20090131 as the reference. After compressing our computed symbols and triples files with lzma2, we see that almost all of our results are better than GRS and GReEn. Recall from previous experiments that sometimes secondary compression with 7-zip does not improve the initial compression achieved by our proposed algorithm. For this genome, we exercise the flexibility of our compression framework. In Table 4, (*) indicates that the M chromosome was not further compressed with lzma2 due to the aforementioned reason. To indicate that M was not compressed, we simply encode a length-25 bitstring (say, 4-byte) header to specify whether or not the lzma2 was applied. There is no need to encode k in the header since it is a fixed value. Thus, the overall compressed files require 15,460,478 bytes, which is only slightly better than GRS and GReEn.
To improve this result, we exploit the difference between the Homo sapiens genome and those discussed earlier. That is, the Homo sapiens genome uses the extended alphabet {A, C, G, K, M, R, S, T, W, Y, a, c, g, k, m, n, r, s, t, w, y}. The observation is that, the alphabet size decreases roughly in half by converting to one character-case. Such a significant reduction in alphabet size will yield more significant redundancies identified by our compression algorithm. Our new decomposition method will decompose each chromosome into two parts: (1) the payload (ρ), representing the chromosome in one character-case, and (2) the character-case bitstring (α), in which each bit records whether the corresponding position in the target was an upper-case character. Next, we use our previously proposed algorithm to compress ρ into C ρ and α into C α . Table 5 shows compression via decomposition for the Homo sapiens genome. Note that the |C ρ |, i.e. compressed payload, column corresponds to the results reported in our initial work [1]. We observe that in various scenarios, the character-case of the alphabet