Sparse Tensor Decomposition for Haplotype Assembly of Diploids and Polyploids
BMC Genomics volume 19, Article number: 191 (2018)
Abstract
Background
Haplotype assembly is the task of reconstructing haplotypes of an individual from a mixture of sequenced chromosome fragments. Haplotype information enables studies of the effects of genetic variations on an organism’s phenotype. Most of the mathematical formulations of haplotype assembly are known to be NP-hard, and haplotype assembly becomes even more challenging as sequencing technology advances and the length of paired-end reads and inserts increases. Assembly of haplotypes of polyploid organisms is considerably more difficult than in the case of diploids. Hence, scalable and accurate schemes with provable performance are desired for haplotype assembly of both diploid and polyploid organisms.
Results
We propose a framework that formulates haplotype assembly from sequencing data as a sparse tensor decomposition. We cast the problem as that of decomposing a tensor having special structural constraints and missing a large fraction of its entries into a product of two factors, U and \(\underline {\mathbf {V}}\); tensor \(\underline {\mathbf {V}}\) reveals haplotype information while U is a sparse matrix encoding the origin of erroneous sequencing reads. An algorithm, AltHap, which reconstructs haplotypes of either diploid or polyploid organisms by iteratively solving this decomposition problem is proposed. The performance and convergence properties of AltHap are theoretically analyzed and, in doing so, guarantees on the achievable minimum error correction scores and correct phasing rate are established. The developed framework is applicable to diploid, biallelic and polyallelic polyploid species. The code for AltHap is freely available from https://github.com/realabolfazl/AltHap.
Conclusion
AltHap was tested in a number of different scenarios and was shown to compare favorably to state-of-the-art methods in applications to haplotype assembly of diploids, and to significantly outperform existing techniques when applied to haplotype assembly of polyploids.
Background
Fast and accurate DNA sequencing has enabled unprecedented studies of genetic variations and their effect on human health and medical treatments. Complete information about variations in an individual’s genome is given by haplotypes, the ordered lists of single nucleotide polymorphisms (SNPs) on the individual’s chromosomes [1]. Haplotype information is of fundamental importance for a wide range of applications. For instance, when the corresponding genes on a homologous pair of chromosomes contain multiple variants, they could exhibit different gene expression patterns. In humans, this may affect an individual’s susceptibility to diseases and response to therapeutic drugs, and hence suggest directions for medical and pharmaceutical research [2]. Haplotype information also enables whole-genome association studies that focus on the so-called tag SNPs [3], representative SNPs in a region of the genome characterized by strong correlation between alleles (i.e., by high linkage disequilibrium). Moreover, haplotype sequences can be used to infer recombination patterns and identify genes under positive selection [4]. In addition to the SNPs and minor structural variations found in a healthy individual’s genome, complex chromosomal aberrations such as translocations and non-reciprocal structural changes – including aneuploidy – are present in cancer cells. Cancer haplotype assembly enables identification of “driver” mutations and thus helps in understanding the mechanisms behind the disease and discovering its genetic signatures.
Haplotype assembly from short reads obtained by high-throughput DNA sequencing requires partitioning (either directly or indirectly) the reads into K clusters (K=2 for diploids, K=3 for triploids, etc.), each collecting the reads corresponding to one of the chromosomes. If the reads were free of sequencing errors, this task would be straightforward. However, sequencing is erroneous – state-of-the-art platforms have error rates on the order of 10^{−3}−10^{−2}. This leads to ambiguities regarding the origin of a read and therefore renders haplotype assembly challenging. For this reason, the vast majority of haplotype assembly techniques attempt to remove the aforementioned ambiguities by either discarding or altering sequencing data; this has led to the minimum fragment removal, minimum SNP removal [5], maximum fragments cut [6], and minimum error correction formulations of the assembly problem [7]. Most of the recent haplotype assembly methods (see, e.g., [8–12]) focus on the minimum error correction (MEC) formulation, where the goal is to find the smallest number of nucleotides in reads that need to be changed so that any read partitioning ambiguities are resolved. It has been shown that finding an optimal solution to the MEC formulation of the haplotype assembly problem is NP-hard [5, 12, 13]. In [14], the authors used a branch-and-bound scheme to minimize the MEC objective over the space of reads; to reduce the search space, they relied on a bound on the objective obtained by a random partition of the reads. Unfortunately, the exponential growth of the complexity of this scheme makes it computationally infeasible even for moderate haplotype lengths. Integer linear programming techniques have been applied to haplotype assembly in [15], but that approach fails on computationally difficult instances of the problem.
More recently, fixed parameter tractable (FPT) algorithms with runtimes exponential in the number of variants per read [16, 17] were proposed; these methods are well-suited for short reads but become infeasible for long ones. A dynamic programming scheme for haplotype assembly of diploids proposed in [18] is also exponential in the length of the longest read. A probabilistic dynamic programming algorithm that optimizes a likelihood function generalizing the MEC objective is developed in [10]; this method is characterized by high accuracy but is significantly slower than the earlier heuristics. The authors of [9, 11] aim to process long reads by developing algorithms for the exact optimization of weighted variants of the MEC score that scale well with read length but are exponential in the sequencing coverage. These methods, along with ProbHap [10], struggle to remain accurate and practically feasible at high coverages (e.g., higher than 12 [10]).
The computational challenges of optimizing the MEC score have motivated several polynomial-time heuristics. In a pioneering work [19], a greedy algorithm seeking the most likely haplotypes was used to assemble haplotypes of the first complete diploid individual genome obtained via high-throughput sequencing. To compute posterior joint probabilities of consecutive SNPs, Bayesian methods relying on MCMC and Gibbs sampling schemes were proposed in [20] and [21], respectively; unfortunately, slow convergence of the Markov chains that these schemes rely on limits their practical feasibility. Following the observation that haplotype assembly can be interpreted as a clustering problem, a max-cut formulation was proposed in [22]; an efficient algorithm (HapCUT, recently upgraded to HapCUT2 [23]) that solves it and significantly outperforms the method in [19] was developed and has been widely used. A flow-graph based approach in [24], HapCompass, re-examined the fragment removal strategy and demonstrated superior performance over HapCUT. Other recent diploid haplotype assembly methods include a greedy max-cut approach in [25], a convex optimization program for minimizing the MEC score in [26], a communication-theoretic interpretation of the problem solved via belief propagation (BP) in [27], and methods that use external reference panels such as 1000 Genomes to improve the accuracy of haplotype assembly in [28, 29]. Note that the deep sequencing coverage provided by state-of-the-art high-throughput sequencing platforms and the emergence of very long insert sizes in recent technologies (e.g., fosmid [25]) may enable assembly of extremely long haplotype blocks but also impose a significant computational burden on the methods above.
Increased affordability, the capability to provide deep coverage, and longer sequencing read lengths have also enabled studies of genetic variations of polyploid organisms. However, haplotype assembly for polyploid genomes is considerably more challenging than that for diploids; to illustrate this, note that for a polyploid genome with k haplotype sequences of length m, under the all-heterozygous assumption there are (k−1)^{m} different genotypes and at least 2^{(m−1)}(k−1)^{m} different haplotype phasings. In part for this reason, relatively few methods for solving the haplotype assembly problem in polyploids have been developed. In fact, with the exception of HapCompass [24], SDhaP [26] and BP [27], the above listed methods are restricted to diploid genomes. Other techniques capable of reconstructing haplotypes for both diploid and polyploid genomes include HapTree [30], a Bayesian method that finds the maximum likelihood haplotype and was shown to be superior to HapCompass and SDhaP (see, e.g., [31] for a detailed comparison); HPoP [8], the state-of-the-art dynamic programming method that significantly outperforms the schemes developed in [24, 26, 30] in terms of accuracy, memory consumption, and speed; and the recently proposed matrix factorization schemes in [32, 33].
In this paper, we propose a unified framework for haplotype assembly of diploid and polyploid genomes based on sparse tensor decomposition; the framework essentially solves a relaxed version of the NP-hard MEC formulation of the haplotype assembly problem. In particular, read fragments are organized in a sparse binary tensor which can be thought of as being obtained by multiplying a matrix that contains information about the origin of erroneous sequencing reads and a tensor that contains haplotype information of an organism. The problem then is recast as that of decomposing a tensor having special structural constraints and missing a large fraction of its entries. Based on a modified gradient descent method and after unfolding the observed and haplotype information bearing tensors, an iterative procedure for finding the decomposition is proposed. The algorithm exploits underlying structural properties of the factors to perform decomposition at a low computational cost. In addition, we analyze the performance and convergence properties of the proposed algorithm and determine bounds on the minimum error correction (MEC) scores and correct phasing rate (CPR) – also referred to as reconstruction rate – that the algorithm achieves for a given sequencing coverage and data error rate. To the best of our knowledge, this is the first polynomial time approximation algorithm for haplotype assembly of diploids and polyploids having explicit theoretical guarantees for its achievable MEC score and CPR. The proposed algorithm, referred to as AltHap, is tested in applications to haplotype assembly for both diploid and polyploid genomes (synthetic and real data) and compared with several state-of-the-art methods. Our extensive experiments reveal that AltHap outperforms the competing techniques in terms of accuracy, running time, or both.
It should be noted that while state-of-the-art haplotype assembly methods for polyploids assume haplotypes may only have biallelic sites, AltHap is capable of reconstructing polyallelic haplotypes which are common in many plants and some animals, are of particular importance for applications such as crop cultivation [34], and may help in reconstruction of viral quasispecies [35]. Moreover, while many state-of-the-art haplotype assembly methods are computationally intensive (e.g., [10, 15]), our extensive numerical experiments demonstrate efficacy of AltHap in a variety of practical settings.
Methods
Problem formulation
We briefly summarize the notation used in the paper. Bold capital letters refer to matrices and bold lowercase letters represent vectors. Tensors are denoted by underlined bold capital letters, e.g., \(\underline {\mathbf {M}}\). M_{::1} and \(\overline {\mathbf {M}}\) denote the frontal slice and the mode-1 unfolding of a third-order tensor \(\underline {\mathbf {M}}\), respectively. For a positive integer n, [n] denotes the set {1,…,n}. The condition number of a rank-k matrix M is defined as κ=σ_{1}/σ_{ k }, where σ_{1}≥⋯≥σ_{ k }>0 are the singular values of M. SVD_{ k }(M) denotes the rank-k approximation (compact SVD) of M computed by the power iteration method [36, 37].
Let \({\mathcal {H}}=\{\mathbf {h}_{1},\dots,\mathbf {h}_{k}\}\) denote the set of haplotype sequences of a k-ploid organism, and let R be an n×m SNP fragment matrix where n denotes the number of sequencing reads and m is the length of the haplotype sequences. R is an incomplete matrix that can be thought of as being obtained by sampling, with errors, a matrix M that consists of n rows; each row of M is a sequence randomly selected from among the k haplotype sequences. Since each SNP is one of four possible nucleotides, we use the alphabet \({\mathcal {A}}=\{(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,1)\}\) to describe the information in the haplotype sequences; the mapping between nucleotides and alphabet components follows an arbitrary convention. The reads can now be organized into an n×m×4 SNP fragment tensor which we denote by \(\underline {\mathbf {R}}\). The (i,j,:) fiber of \(\underline {\mathbf {R}}\), i.e., a one-dimensional slice obtained by fixing the first and second indices of the tensor, represents the value of the j^{th} SNP in the i^{th} read. Let Ω denote the set of informative fibers of \(\underline {\mathbf {R}}\), i.e., the set of (i,j,:) such that the i^{th} read covers the j^{th} SNP. Define an operator \({\mathcal {P}}_{\Omega }(.)\) as
\({\mathcal {P}}_{\Omega }(\underline {\mathbf {R}})_{ij:}=\begin{cases}\mathbf {R}_{ij:}, & (i,j,:)\in \Omega \\ \mathbf {0}, & \text{otherwise.}\end{cases}\)
\({\mathcal {P}}_{\Omega }(\underline {\mathbf {R}})\) is a tensor obtained by sampling, with errors, tensor \(\underline {\mathbf {M}} \in {\mathcal {A}}^{n \times m}\) having n copies of k encoded haplotype sequences as its horizontal slices. More specifically, we can write \(\underline {\mathbf {M}}=\mathbf {U}\underline {\mathbf {V}}^{\top }\), where \(\underline {\mathbf {V}} \in {\mathcal {A}}^{m \times k}\) contains haplotype information, i.e., the j^{th} vertical slice of \(\underline {\mathbf {V}}\), V_{:j:}, is the encoded sequence of the j^{th} haplotype, and U∈{0,1}^{n×k} is a matrix that assigns each of n horizontal slices of \(\underline {\mathbf {M}}\) to one of k haplotype sequences, i.e., the i^{th} row of U, u_{ i }, is an indicator of the origin of the i^{th} read. Let Φ={e_{1},…,e_{ k }}, where \(\mathbf {e}_{l} \in \mathbb {R}^{k}\) is the l^{th} standard basis vector having 1 in the l^{th} position and 0 elsewhere. The rows of U are standard unit basis vectors in \(\mathbb {R}^{k}\), i.e., u_{ i }∈Φ, ∀i∈[n]. This representation is illustrated in Fig. 1 where the (1,1,:) fiber of \(\underline {\mathbf {V}}\) specified with dashed lines is mapped to the (1,1,:) fiber of \(\underline {\mathbf {M}}\) which in turn implies that in the example described in Fig. 1 we have u_{1}=e_{1}.
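To make this representation concrete, the following Python sketch (a toy example, not the authors' released code; the nucleotide-to-slot convention and all sequences are made up) builds \(\underline {\mathbf {V}}\), U, and \(\underline {\mathbf {M}}=\mathbf {U}\underline {\mathbf {V}}^{\top }\) for a small diploid instance:

```python
import numpy as np

# Arbitrary nucleotide-to-slot convention for the one-hot alphabet A.
ALPHABET = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_haplotypes(haps):
    """Encode k nucleotide strings of length m as a binary m x k x 4 tensor V,
    whose (j, l, :) fiber is the one-hot code of SNP j on haplotype l."""
    k, m = len(haps), len(haps[0])
    V = np.zeros((m, k, 4), dtype=int)
    for l, h in enumerate(haps):
        for j, base in enumerate(h):
            V[j, l, ALPHABET[base]] = 1
    return V

V = encode_haplotypes(["ACGT", "TGCA"])   # k = 2 toy haplotypes, m = 4 SNPs

# U assigns each of n = 3 reads to a haplotype; rows are standard basis vectors.
U = np.array([[1, 0], [0, 1], [1, 0]])

# M_{ij:} = sum_l U_{il} V_{jl:} gives the noise-free n x m x 4 fragment tensor.
M = np.einsum("il,jlc->ijc", U, V)
```

Here read 1 copies haplotype 1 (u_{1}=e_{1}), mirroring the mapping of fibers described for Fig. 1.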
DNA sequencing is erroneous and hence we assume a model where the informative fibers in \(\underline {\mathbf {R}}\) are perturbed versions of the corresponding fibers in \(\underline {\mathbf {M}}\) with data error rate p_{ e }, i.e., if the (i,j,:)∈Ω fiber in \(\underline {\mathbf {M}}\) takes value \(\mathbf {e}_{l} \in {\mathcal {A}}\), R_{ij:} with probability 1−p_{ e } equals e_{ l } and with probability p_{ e } takes one of the other three possibilities. Thus, the observed SNP fragment tensor can be modeled as \(\underline {\mathbf {R}}={\mathcal {P}}_{\Omega }(\underline {\mathbf {M}}+\underline {\mathbf {N}})\) where \(\underline {\mathbf {N}}\) is an additive noise tensor defined as
\(\mathbf {N}_{ij:}=\begin{cases}\mathbf {0}, & \text{with probability } 1-p_{e}\\ \mathbf {a}-\mathbf {M}_{ij:},\ \mathbf {a}\sim {\mathcal {U}}({\mathcal {A}}\backslash \{\mathbf {M}_{ij:}\}), & \text{with probability } p_{e}\end{cases}\)
where the notation \({\mathcal {U}}({\mathcal {A}}\backslash \{\mathbf {M}_{ij:}\})\) denotes uniform selection of a vector from \({\mathcal {A}}\backslash \{\mathbf {M}_{ij:}\}\). The goal of haplotype assembly can now be formulated as follows: Given the SNP fragment tensor \(\underline {\mathbf {R}}\), find the tensor of haplotype sequences \(\underline {\mathbf {V}}\) that minimizes the MEC score.
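The observation model above can be simulated with a short sketch; the function name `observe` and the boolean `mask` argument (indicating Ω) are illustrative choices, not part of the paper:

```python
import numpy as np

def observe(M, mask, p_e, rng):
    """Sketch of R = P_Omega(M + N): keep an informative fiber with probability
    1 - p_e, otherwise replace it by a uniformly drawn *different* one-hot
    vector; fibers outside Omega (mask == False) are left all-zero."""
    n, m, _ = M.shape
    R = np.zeros_like(M)
    eye = np.eye(4, dtype=M.dtype)
    for i in range(n):
        for j in range(m):
            if not mask[i, j]:
                continue
            true_idx = int(M[i, j].argmax())
            if rng.random() < p_e:
                # Uniform choice over the three wrong one-hot vectors.
                wrong = rng.choice([c for c in range(4) if c != true_idx])
                R[i, j] = eye[wrong]
            else:
                R[i, j] = eye[true_idx]
    return R
```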
Next, we formalize the MEC score as well as the correct phasing rate, also known as the reconstruction rate, the two metrics that are used to characterize performance of haplotype assembly schemes (see, e.g., [15, 18, 38, 39]). For two alleles a_{1}, \(\mathbf {a}_{2} \in {\mathcal {A}}\cup \{\mathbf {0}\}\), we define a dissimilarity function d(a_{1},a_{2}) as
\(d(\mathbf {a}_{1},\mathbf {a}_{2})=\begin{cases}1, & \mathbf {a}_{1}\neq \mathbf {0},\ \mathbf {a}_{2}\neq \mathbf {0},\ \mathbf {a}_{1}\neq \mathbf {a}_{2}\\ 0, & \text{otherwise.}\end{cases}\)
The MEC score is the smallest number of fibers in \(\underline {\mathbf {R}}\) that need to be altered so that the resulting modified data is consistent with the reconstructed haplotype \(\underline {\mathbf {V}}\), i.e.,
\(\text{MEC}=\sum_{i=1}^{n}\min_{p\in [k]}\sum_{j=1}^{m}d\left(\mathbf {R}_{ij:},\mathbf {V}_{jp:}\right).\)
The correct phasing rate (CPR), also referred to as the reconstruction rate, can conveniently be written using the dissimilarity function d(.,.). Let \(\underline {\mathbf {V}}^{t}\) denote the tensor of true haplotype sequences. Then
\(\text{CPR}=1-\frac{1}{mk}\min_{\mathcal {M}}\sum_{j=1}^{m}\sum_{l=1}^{k}d\left(\mathbf {V}_{jl:},\mathbf {V}^{t}_{j{\mathcal {M}}(l):}\right),\)
where \(\mathcal {M}\) is a one-to-one mapping from lateral slices of \(\underline {\mathbf {V}}\) to those of \(\underline {\mathbf {V}}^{t}\), i.e., a one-to-one mapping from the set of reconstructed haplotypes to the set of true haplotypes.
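Under one natural reading of these definitions (an unobserved, all-zero fiber contributes no mismatch), the two metrics can be computed as follows; the helper names are ours, and the brute-force permutation search is fine for the small ploidies of interest:

```python
import numpy as np
from itertools import permutations

def d(a1, a2):
    """Dissimilarity: 1 iff both fibers are observed (nonzero) and differ."""
    if not a1.any() or not a2.any():
        return 0
    return int(not np.array_equal(a1, a2))

def mec(R, V):
    """MEC score: each read is charged its mismatches against its closest haplotype."""
    n, m, _ = R.shape
    k = V.shape[1]
    return sum(min(sum(d(R[i, j], V[j, p]) for j in range(m)) for p in range(k))
               for i in range(n))

def cpr(V_hat, V_true):
    """Correct phasing rate under the best one-to-one haplotype matching."""
    m, k, _ = V_true.shape
    best = max(sum(np.array_equal(V_hat[j, perm[l]], V_true[j, l])
                   for j in range(m) for l in range(k))
               for perm in permutations(range(k)))
    return best / (m * k)
```

Because CPR maximizes over the matching \(\mathcal {M}\), a reconstruction that merely permutes the haplotype labels still scores a CPR of 1.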
We now describe our proposed relaxation of the MEC formulation of the haplotype assembly problem. Let p_{ i }∈[k], ∀i∈[n] be defined as \(p_{i}= \arg \min _{p} \sum _{j=1}^{m} d\left (\mathbf {R}_{ij:},{\mathbf {V}}_{jp:}\right)\). Notice that for any j such that d(R_{ij:},V_{jp:})=1, \(\left\|\mathbf {R}_{ij:}-{\mathbf {V}}_{jp:}\right\|_{2}^{2}=2\). Therefore, by denoting \(\Omega =\cup _{i=1}^{n}\Omega _{i}\), where Ω_{ i } is the set of informative fibers for the i^{th} read, we obtain
where (a) follows from the definition of the Frobenius norm and vec(.) in (b) denotes the vectorization of its argument. Let e_{ p } be the p^{th} standard unit vector ∀p∈[k]. It is straightforward to observe that the last equality in (6) can equivalently be written as
where \(\overline {\mathbf {V}}\) is the mode-1 unfolding of the tensor \(\underline {\mathbf {V}}\). Hence,
Let U∈{0,1}^{n×k} be the matrix such that for its i^{th} row it holds that \(\mathbf {u}_{i} = \mathbf {e}_{p_{i}}\). In addition, notice that vec(R_{i::}) is the i^{th} row of \(\overline {\mathbf {R}}\). Therefore, from the definition of the Frobenius norm and the fact that \({\mathcal {P}}_{\Omega }(\overline {\mathbf {R}})=\overline {\mathbf {R}}\) we obtain
The optimization problem in (7) is NP-hard since the entries of \(\overline {\mathbf {V}}\) are binary and the objective function is non-convex. Relaxing the binary constraint to \(\overline {\mathbf {V}}_{i,j} \in {\mathcal {C}}\), ∀i∈[4m],∀j∈[k], where \({\mathcal {C}}=[0,1]\), results in the following relaxation of the MEC formulation,
\(\min_{\mathbf {U},\overline {\mathbf {V}}}\ \left\|{\mathcal {P}}_{\Omega }\left(\overline {\mathbf {R}}-\mathbf {U}\overline {\mathbf {V}}^{\top }\right)\right\|_{F}^{2}\quad \text{s.t. } \mathbf {u}_{i}\in \Phi\ \forall i\in [n],\ \overline {\mathbf {V}}_{i,j}\in {\mathcal {C}}.\qquad (8)\)
The new formulation can be summarized as follows. We start by finding the so-called mode-1 unfolding of tensors \(\underline {\mathbf {M}}\) and \(\underline {\mathbf {V}}\) and denote the decomposition \(\overline {\mathbf {M}}=\mathbf {U}\overline {\mathbf {V}}^{\top }\), as illustrated in Fig. 2. As implied by the figure, after unfolding, the entries of the (1,1,:) fiber are mapped to four blocks of \(\overline {\mathbf {M}}\) and \(\overline {\mathbf {V}}\) that correspond to the frontal slices of tensors \(\underline {\mathbf {M}}\) and \(\underline {\mathbf {V}}\), respectively. Then, to determine the haplotype sequence that minimizes the MEC score, one needs to solve (8) and find the optimal tensor decomposition.
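A minimal sketch of the unfoldings, assuming the column blocks of \(\overline {\mathbf {M}}\) are ordered by frontal slice as in Fig. 2 (the function names are ours):

```python
import numpy as np

def unfold_reads(T):
    """Mode-1 unfolding of an n x m x 4 read tensor into an n x 4m matrix;
    the four column blocks correspond to the frontal slices."""
    n, m, c = T.shape
    return T.transpose(0, 2, 1).reshape(n, c * m)

def unfold_haplotypes(V):
    """Matching 4m x k unfolding of the m x k x 4 haplotype tensor: column l
    stacks the frontal slices of haplotype l."""
    m, k, c = V.shape
    return np.concatenate([V[:, :, s] for s in range(c)], axis=0)

# Sanity check that unfolding commutes with the factorization M = U * V:
rng = np.random.default_rng(1)
k, m, n = 2, 5, 6
V = np.zeros((m, k, 4), dtype=int)
for j in range(m):
    for l in range(k):
        V[j, l, rng.integers(4)] = 1                   # random one-hot fibers
U = np.eye(k, dtype=int)[rng.integers(k, size=n)]      # random read assignments
M = np.einsum("il,jlc->ijc", U, V)
```

With these conventions the identity \(\overline {\mathbf {M}}=\mathbf {U}\overline {\mathbf {V}}^{\top }\) holds entrywise.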
The AltHap algorithm
Although the objective function in (8), i.e.,
\(f(\mathbf {U},\overline {\mathbf {V}})=\left\|{\mathcal {P}}_{\Omega }\left(\overline {\mathbf {R}}-\mathbf {U}\overline {\mathbf {V}}^{\top }\right)\right\|_{F}^{2},\)
is convex in each of the factors when the other factor is fixed, \(f(\mathbf {U},\overline {\mathbf {V}})\) is generally non-convex. To facilitate a computationally efficient search for the solution of (8), we rely on a modified gradient search algorithm which exploits the special structures of U and \(\overline {\mathbf {V}}\) and iteratively updates the estimates \((\mathbf {U}_{t},\overline {\mathbf {V}}_{t})\) starting from some initial point \((\mathbf {U}_{0},\overline {\mathbf {V}}_{0})\). More specifically, given the current estimates \((\mathbf {U}_{t},\overline {\mathbf {V}}_{t})\), the update rules are
\(\mathbf {u}_{i}^{(t+1)}=\arg \min_{\mathbf {u}\in \Phi }\left\|{\mathcal {P}}_{\Omega_{i}}\left(\text{vec}(\mathbf {R}_{i::})-\overline {\mathbf {V}}_{t}\mathbf {u}\right)\right\|_{2}^{2},\quad i\in [n],\qquad (9)\)
\(\overline {\mathbf {V}}_{t+1}=\Pi_{\mathcal {C}}\left(\overline {\mathbf {V}}_{t}-\alpha \nabla f\left(\overline {\mathbf {V}}_{t}\right)\right),\qquad (10)\)
where \({{\nabla f}} \left (\overline {\mathbf {V}}_{t}\right) = -\left ({\mathcal {P}}_{\Omega }\left (\overline {\mathbf {R}}-\mathbf {U}_{t+1}\overline {\mathbf {V}}_{t}^{\top }\right)\right)^{\top }\mathbf {U}_{t+1}\) denotes the partial derivative of \(f\left (\mathbf {U},\overline {\mathbf {V}}\right)\) evaluated at \(\left (\mathbf {U}_{t+1},\overline {\mathbf {V}}_{t}\right)\), α is a judiciously chosen step size, and \({{\Pi _{\mathcal {C}}}}\) denotes the projection operator onto \({\mathcal {C}}\). Notice that the optimization in (9) is done by exhaustively searching over the k vectors in Φ. Since the number of haplotypes k is relatively small, the complexity of the exhaustive search in (9) is low. The proposed scheme is formalized as Algorithm 1.
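The alternation can be sketched as follows. This is an illustrative reimplementation under simplifying assumptions (a fixed step size, and a simple row-sampling initialization instead of the SVD-based one of Algorithm 1), not the released AltHap code:

```python
import numpy as np

def althap_sketch(R_bar, mask_bar, k, alpha=0.1, iters=50, seed=0):
    """Alternate between the exact assignment update for U over the k basis
    vectors and a projected gradient step on V_bar, projecting onto C = [0, 1]."""
    rng = np.random.default_rng(seed)
    n, fm = R_bar.shape
    # Simple initialization: k distinct rows of R_bar as starting columns of
    # V_bar (the paper instead initializes via a rank-k SVD; see Algorithm 1).
    V_bar = R_bar[rng.choice(n, size=k, replace=False)].T.astype(float)
    U = np.zeros((n, k))
    for _ in range(iters):
        # Update (9): each read picks the column of V_bar minimizing its
        # residual over the observed (masked) entries -- exhaustive over Phi.
        for i in range(n):
            resid = (R_bar[i][:, None] - V_bar) * mask_bar[i][:, None]
            U[i] = np.eye(k)[int((resid ** 2).sum(axis=0).argmin())]
        # Update (10): projected gradient step on V_bar for
        # f = ||P_Omega(R_bar - U V_bar^T)||_F^2.
        grad = -2.0 * ((mask_bar * (R_bar - U @ V_bar.T)).T @ U)
        V_bar = np.clip(V_bar - alpha * grad, 0.0, 1.0)
    return U, V_bar
```

The clipping step implements \(\Pi_{\mathcal {C}}\) elementwise, which is valid because \({\mathcal {C}}=[0,1]\) is a box.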
Note that AltHap differs from a previously proposed SCGD algorithm in [32] as follows: (i) AltHap’s novel representation of haplotypes and sequencing reads using binary tensors provides a unified framework for haplotype assembly of diploids as well as biallelic and polyallelic polyploids. The method in [32] is not capable of performing haplotype assembly of polyallelic polyploid genomes. (ii) Unlike [32], AltHap exploits the fact that \(\underline {\mathbf {V}}\) is composed of binary entries by imposing the constraint \(\overline {\mathbf {V}}_{i,j} \in {\mathcal {C}}\) in the MEC relaxation in (8). As our results in Section 5 demonstrate, this leads to significant performance improvements of AltHap over SCGD in a variety of settings. (iii) Lastly, in Section 4 we provide analysis of the global convergence of AltHap and derive explicit analytical bounds on its achievable performance. Such performance guarantees do not exist for the method in [32].
Convergence analysis of AltHap
In this section, we analyze the convergence properties of AltHap and provide performance guarantees in different scenarios.
In Additional file 1 we show that a judicious choice of the step size α according to
where C∈(0,2) is a constant, guarantees that the value of the objective function in (8) decreases as one alternates between (9) and (10), which in turn implies that AltHap converges. The key observation that leads to this result is that \(f(\mathbf {U},\overline {\mathbf {V}})\) is a convex function in each of the factor matrices and that \({\mathcal {C}}=[0,1]\) is a convex set; hence the projection \({{\Pi _{\mathcal {C}}}}\) in (10) leads to a reduction of \(f\left (\mathbf {U}_{t},\overline {\mathbf {V}}_{t}\right)\) in each iteration t.
It is important, however, to determine the conditions under which the stationary point of AltHap coincides with the global optimum of (8). To this end, we first provide the definition of incoherence of matrices [40].
Definition 1
A rank-k matrix \(\mathbf {M} \in \mathbb {R}^{n \times m}\) with singular value decomposition \(\mathbf {M}=\hat {\mathbf {U}}\mathbf {\Sigma }\hat {\mathbf {V}}^{\top }\) is incoherent with parameter \(1\leq \mu \leq \frac {\max \{n,m\}}{k}\) if for every 1≤i≤n, 1≤j≤m
\(\left\|\hat {\mathbf {U}}^{\top }\mathbf {e}_{i}\right\|_{2}^{2}\leq \frac{\mu k}{n},\qquad \left\|\hat {\mathbf {V}}^{\top }\mathbf {e}_{j}\right\|_{2}^{2}\leq \frac{\mu k}{m}.\)
Let each fiber in \(\underline {\mathbf {M}}\) be observed uniformly at random with probability p. Let \(C_{\text{snp}}\triangleq mp\) denote the expected number of SNPs covered by each read, and \(C_{\text{seq}}\triangleq np\) denote the expected coverage for each of the haplotype sequences. Theorem 1, built upon the results of [41–43], states that with an adequate number of covered SNPs, the solution found by AltHap reconstructs \(\overline {\mathbf {M}}\) up to an error term that stems from the existence of errors in sequencing reads.
Theorem 1
Assume \(\overline {\mathbf {M}}\) is μ-incoherent. Suppose the condition number of \(\overline {\mathbf {M}}\) is κ. Then there exist numerical constants C_{0},C_{1}>0 such that if Ω is uniformly generated at random and
\(C_{\text {snp}} > C_{0} \sqrt [3]{\mu ^{4}k^{14}\kappa ^{12} C_{\text {seq}}},\qquad (13)\)
with probability at least \(1-\frac {1}{m^{3}}\), the solution \((\mathbf {U}^{*}, \overline {\mathbf {V}}^{*})\) found by AltHap satisfies
The proof of Theorem 1 relies on a coupled perturbation analysis to establish a certain type of local convexity of the objective function around the global optimum. Thus, under (13) there is no other stationary point around the global optimum and hence, starting from a good initial point, AltHap converges globally. We employ the initialization procedure suggested by [42] – summarized in the initialization step of Algorithm 1 – which is based on a low-cost singular value decomposition of \(\overline {\mathbf {R}}\) using the power method [36, 37] and with high probability lies in the described convexity region of \(f(\mathbf {U},\overline {\mathbf {V}})\).
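A generic block power iteration of the kind referenced here might look as follows; this is a textbook sketch standing in for the SVD_{ k }(.) routine, not the specific implementation of [36, 37]:

```python
import numpy as np

def svd_k(A, k, iters=50, seed=0):
    """Rank-k SVD via block power (subspace) iteration on A^T A."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    Q, _ = np.linalg.qr(rng.standard_normal((m, k)))
    for _ in range(iters):
        # Iterate toward the top-k right singular subspace of A.
        Q, _ = np.linalg.qr(A.T @ (A @ Q))
    # Small k x k problem recovers the singular triplets from the subspace.
    Uk, s, Wt = np.linalg.svd(A @ Q, full_matrices=False)
    return Uk, s, Wt @ Q.T          # A is approximated by Uk @ diag(s) @ Vt
```

Each iteration costs only two matrix-vector-block products, which is what makes this initialization cheap even for large \(\overline {\mathbf {R}}\).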
Remark 1
Under the assumptions of Theorem 1, the condition \(C_{\text {snp}} > C_{0} \sqrt [3]{\mu ^{4}k^{14}\kappa ^{12} C_{\text {seq}}}\) specifies a lower bound on the expected number of covered SNPs, C_{snp}, that is required for the exact recovery of \(\overline {\mathbf {M}}\) in the idealistic error-free scenario, i.e., for p_{ e }=0. With higher sequencing coverage, more SNPs are covered by the reads and hence the C_{snp} required for accurate haplotype assembly scales with C_{seq} along with the other parameters. Moreover, the term \(\frac {C_{1} \kappa ^{4} p_{e} k m}{2C_{\text {snp}}}\) on the right hand side of (14) is the bound on the error of the solution generated by AltHap, which increases with the sequencing error rate p_{ e } and ploidy k, and decreases with C_{snp} and the number of reads n, as expected.
Remark 2
If \(\overline {\mathbf {M}}\) is well-conditioned, i.e., \(\overline {\mathbf {M}}\) is characterized by a small incoherence parameter μ and a small condition number κ, the recovery becomes easier; this is reflected in the less strict sufficient condition (13) and the improved achievable performance (14). In fact, as we verified in our simulation studies, when the proposed framework is used for haplotype assembly, the parameters μ and κ associated with \(\overline {\mathbf {M}}\) are close to 1 (the ideal case). Theorem 2 provides theoretical bounds on the expected MEC score and CPR achieved by AltHap. (See Additional file 1 for the proof.)
Theorem 2
Under the conditions of Theorem 1, with probability at least \(1-\frac {1}{m^{3}}\) it holds that
Moreover, if the reads sample haplotype sequences uniformly, with probability at least \(1-\frac {1}{m^{3}}\) it holds that
Remark 3
The bound established in (15) suggests that the expected MEC score increases with the length of the haplotype sequences, the sequencing error rate, the number of haplotype sequences, and the sequencing coverage. A higher sequencing coverage results in a larger set of fragment data, which in turn leads to higher MEC scores.
Remark 4
As intuitively expected, the bound (16) suggests that AltHap’s achievable expected CPR improves with the number of sequencing reads and the SNP coverage; on the other hand, the CPR deteriorates at higher data error rates. Finally, assuming the same sequencing parameters, (16) implies that reconstruction of polyploid haplotypes is more challenging than that of diploids.
Results and discussion
We evaluated the performance of the proposed method on both experimental and simulated data, as described next. AltHap was implemented in Python and MATLAB, and the simulations were conducted on a single core of an Intel Xeon E5-2690 v3 (Haswell) at 2.6 GHz with 64 GB of DDR4-2133 RAM. The benchmarking algorithms include Belief Propagation (BP) [27], a communication-inspired method capable of performing haplotype assembly of diploid and biallelic polyploid species; HapTree [30]; the integer linear programming (ILP) technique [15]; SCGD [32]; and HPoP [8], the state-of-the-art dynamic programming algorithm for haplotype assembly of diploid and biallelic polyploid species, shown to be superior to HapTree [30], HapCompass [24], and SDhaP [26] in terms of both accuracy and speed [8, 31]. Following prior works on haplotype assembly (see, e.g., [15, 18, 38, 39]), we use the MEC score and CPR to assess the quality of the reconstructed haplotypes. For clarity, in the tables we report the CPR percentage, i.e., CPR × 100.
Experimental data
We first tested the performance of AltHap in an application to haplotype reconstruction on a data set from the 1000 Genomes Project – in particular, the sample NA12878 sequenced at high coverage using the 454 sequencing platform. In this work, we take the trio-phased variant calls from the GATK resource bundle [44] as the true haplotype sequences. We compare the MEC score, CPR, and running time achieved by AltHap to those of HPoP, BP, HapTree, SCGD and ILP. All the algorithms used in the benchmarking study were executed with their default settings. The results are given in Table 1. As seen there, among the considered algorithms AltHap achieves the highest CPR for the majority of the chromosomes (nine), followed by HPoP and ILP (five each). As expected, ILP achieves the lowest MEC scores among all the methods, but this comes at a computational cost much higher than that of AltHap. Notice that a lower MEC score does not necessarily imply a better CPR. MEC is the error evaluated on observed SNP positions, i.e., the training data points, while CPR is related to the generalization error calculated on unobserved SNP positions, i.e., the test data points. Since the sequencing reads are erroneous, an algorithm might overfit while trying to minimize the MEC score.
Fosmid pool-based sequencing provides very long fragments and is characterized by a much higher ratio of the number of SNPs to the number of reads than the standard data sets generated by high-throughput sequencing platforms. We consider the fosmid sequence data for chromosomes of HapMap NA12878 and again take the trio-phased variant calls from the GATK resource bundle [44] as the true haplotype sequences. We compare the performance of AltHap to those of HPoP, BP, HapTree, SCGD and ILP and report the results in Table 2. As can be seen from Table 2, AltHap achieves the best CPR for most of the chromosomes (thirteen out of 22), followed by HPoP (four). As with the 1000 Genomes Project data, ILP achieves the best MEC scores but is much slower and significantly inferior to AltHap in terms of CPR. Note that since HapTree could not finish assembling the haplotype of the 6^{th} chromosome in 48 hours, that result is missing from the table.
Simulated data: the diploid case
To further benchmark the performance of the proposed scheme, we test it on the synthetic data from [39] often used to compare methods for haplotype assembly of diploids. These data sets emulate haplotype assembly under varied coverage, sequencing error rates and haplotype block lengths. We constrain our study to the assembly of haplotype blocks having length m=700 bp (the longest blocks in the data set). The results, averaged over 100 instances of the problem, are given in Table 3. As evident from this table, AltHap outperforms the other algorithms for nearly all the combinations of data error rates and sequencing coverage and is also much faster than SCGD, ILP, BP and HapTree, while being slightly slower than HPoP. Note that ILP could only finish, within 48 hours, the haplotype assembly for the two settings with p_{ e }=0.1 and coverages of 5 and 8. Hence, the results for the other settings are missing from the table.
Simulated data: the biallelic polyploid case
The performance of AltHap in applications to haplotype assembly for polyploids was tested using simulations; in particular, we studied how AltHap’s accuracy depends on coverage and sequencing error rate. The generated data sets consist of paired-end reads with long inserts that emulate the scenario where long connected haplotype blocks need to be assembled. We simulate sampling of the entire genome using paired-end reads and generate SNPs along the genome with probability 1 in 300. In other words, the distance between pairs of adjacent SNPs follows a geometric distribution with parameter \(p_{snp}=\frac {1}{300}\) (the SNP rate). To emulate a sequencing process capable of facilitating reconstruction of long haplotype blocks, we randomly generate paired-end reads of length 2×250 with an average insert length of 10,000 bp and a standard deviation of 10%; sequencing errors are inserted using realistic error profiles [45] and genotyping is performed using a Bayesian approach [44]. At such read and insert lengths, the generated haplotype blocks are nearly fully connected. Each experiment is repeated 10 times. AltHap is compared with HPoP, BP and SCGD. We also tried to run HapTree; however, HapTree could not finish the simulations for the considered block size in 48 hours.
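The described SNP-spacing model can be reproduced in a few lines; the seed and sample size below are arbitrary choices for illustration:

```python
import numpy as np

# Inter-SNP distances are geometric with success probability p_snp = 1/300,
# i.e., a mean spacing of 300 bp along the genome.
rng = np.random.default_rng(42)
p_snp = 1 / 300
gaps = rng.geometric(p_snp, size=2000)   # distances between adjacent SNPs
positions = np.cumsum(gaps)              # SNP coordinates along the genome
```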
Table 4 compares the CPR, MEC score and running time of AltHap with those of H-PoP, BP and SCGD for biallelic triploid genomes with haplotype block length m=1000 and several combinations of sequencing coverage and data error rate. As can be seen there, in terms of CPR AltHap outperforms all other methods in all scenarios, while in terms of MEC score it outperforms the other methods in the vast majority of scenarios. Note that, unlike that of H-PoP, the complexity of AltHap scales gracefully with coverage (i.e., although H-PoP is very fast at low coverages, at the highest coverage AltHap becomes much faster than H-PoP). As Table 4 also shows, the overall CPR (MEC score) of all algorithms decreases (increases) as the probability of error increases. This is expected, and also reflected in our theoretical results, since haplotype assembly becomes more challenging at higher data error rates. Additionally, MEC scores increase with coverage since higher coverage implies more sequencing reads; the total number of observed SNP positions therefore increases, which in turn results in higher MEC scores.
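For reference, the two figures of merit compared in these tables can be written down compactly: the MEC score counts the minimum number of observed SNP calls that must be corrected so that every read matches some haplotype, and the CPR is the best per-position agreement with the true haplotypes over all orderings of the reconstructed ones. The sketch below follows these standard definitions; the helper names and toy data are ours, and AltHap's actual implementation differs.

```python
from itertools import permutations

def mec_score(reads, haplotypes):
    """MEC: for each read, count mismatches against its best-matching
    haplotype ('-' marks unobserved SNP positions); sum over all reads."""
    total = 0
    for read in reads:
        total += min(
            sum(1 for r, h in zip(read, hap) if r != '-' and r != h)
            for hap in haplotypes
        )
    return total

def correct_phasing_rate(estimated, truth):
    """CPR: the largest fraction of matching SNP calls over all assignments
    of estimated haplotypes to true ones (haplotype order is arbitrary)."""
    k, m = len(truth), len(truth[0])
    best = 0
    for perm in permutations(range(k)):
        matches = sum(
            1 for i in range(k) for j in range(m)
            if estimated[perm[i]][j] == truth[i][j]
        )
        best = max(best, matches)
    return best / (k * m)

# Triploid toy example: three haplotypes over four biallelic SNPs.
truth = ['0101', '0110', '1001']
est = ['0110', '0101', '1001']   # first two haplotypes swapped
reads = ['01--', '-110', '10-1']  # error-free partial reads
```

The brute-force search over haplotype orderings is exponential in the ploidy k, which is acceptable for the small ploidies (2 to 4) considered here.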
The results of tests conducted on simulated biallelic tetraploid genomes are summarized in Table 5, where we observe that AltHap outperforms the competing schemes in terms of both accuracy and running time. To investigate how the performance and complexity of AltHap vary with coverage and read length, in Table 6 we report its CPR, MEC score and runtime obtained by simulating assembly of biallelic triploid haplotypes using paired-end reads of length 2×250, 2×300 and 2×500 and coverages 10, 20 and 30 (the block length is set to m=1000 and the data error rate is \(p_e=0.002\)). The results imply that AltHap's runtime scales approximately linearly with both read length and coverage over the considered range of these two parameters. Additionally, as we see in Table 6, the MEC score slightly increases with read length. The impact of read length here is similar to that of sequencing coverage: longer sequencing reads provide more observed SNP positions and hence the MEC score may increase, as also predicted by our theoretical results.
Simulated data: the polyallelic polyploid case
We further studied the performance of AltHap on triploid and tetraploid organisms having polyallelic sites; the results are summarized in Tables 7 and 8, respectively. Notice that none of the competing schemes is capable of handling polyallelic genomes. Evidently, AltHap is able to reconstruct the underlying haplotype sequences accurately and at a low computational cost.
The results of these extensive simulations imply that, as expected, haplotype assembly becomes more challenging as the number of haplotype sequences (i.e., the ploidy) increases. Nevertheless, in all the conducted studies, AltHap consistently reconstructs haplotype sequences accurately and with practically feasible computational cost. In addition, the results in Tables 4 and 5 demonstrate that the computational time of AltHap grows significantly more slowly with coverage than that of the competing schemes. In particular, for the high coverages characteristic of high-throughput sequencing technologies, AltHap is the most efficient among the considered algorithms.
CPR lower bound
Finally, we use the results obtained by running AltHap on simulated biallelic triploid data (i.e., the results summarized in Table 4) to examine the tightness of the theoretical bound on the CPR stated in Theorem 2. In particular, the theoretical bound is compared to the CPRs empirically computed for various combinations of coverage and data error rate (averaged over 10 independent problem instances). In Fig. 3, the theoretical bound and the experimental CPR results are shown as functions of the data error rate for coverage 15; we observe that the bound is reasonably close to the experimental results over the considered range of data error rates. In Fig. 4, the theoretical bound and the experimental CPR results are plotted against sequencing coverage for data error rate \(p_e=0.002\); this figure, too, implies that the theoretical CPR bound is relatively close to the experimental results. Notice that, as Fig. 3 shows, the CPR as well as the lower bound derived in Theorem 2 decrease as the sequencing error rate increases. On the other hand, Fig. 4 shows that higher coverage improves AltHap's CPR, which again is reflected in our theoretical results.
Conclusion
In this paper, we developed a novel haplotype assembly framework for both diploid and polyploid organisms that relies on sparse tensor decomposition. The proposed algorithm, referred to as AltHap, exploits structural properties of the problem to efficiently find the tensor factors and thus assemble haplotypes in an iterative fashion, alternating between two computationally tractable optimization tasks. If the iterations start from an appropriately selected initial point, AltHap converges to a stationary point which is, with high probability, in close proximity to the solution that is optimal in the MEC sense. In addition, we analyzed the performance and convergence properties of AltHap and derived bounds on its achievable MEC score and correct phasing rate. Unlike the majority of existing methods for haplotype assembly of polyploids, AltHap is capable of reconstructing haplotypes with polyallelic sites, making it useful in a number of applications involving plant genomes. Our extensive tests on real and simulated data demonstrate that AltHap compares favorably to competing methods in applications to haplotype assembly of diploids, and significantly outperforms existing techniques when applied to haplotype assembly of polyploids.
As part of the future work, it is of interest to extend the sparse tensor decomposition framework to viral quasispecies reconstruction and recovery of bacterial haplotypes from metagenomic data.
Abbreviations
BP: Belief propagation
CPR: Correct phasing rate
FPT: Fixed parameter tractable
ILP: Integer linear programming
MCMC: Markov chain Monte Carlo
MEC: Minimum error correction
SNP: Single nucleotide polymorphism
References
Schwartz R, et al. Theory and algorithms for the haplotype assembly problem. Commun Inf Syst. 2010; 10(1):23–38.
Clark AG. The role of haplotypes in candidate gene studies. Genet Epidemiol. 2004; 27(4):321–33.
Gibbs RA, Belmont JW, Hardenbol P, Willis TD, Yu F, et al. The International HapMap Project. Nature. 2003; 426(6968):789–96.
Sabeti PC, Reich DE, Higgins JM, Levine HZP, Richter DJ, et al. Detecting recent positive selection in the human genome from haplotype structure. Nature. 2002; 419(6909):832–37.
Lancia G, Bafna V, Istrail S, Lippert R, Schwartz R. SNPs problems, complexity, and algorithms. In: European Symposium on Algorithms. Springer: 2001. p. 182–93.
Duitama J, Huebsch T, McEwen G, Suk E, Hoehe MR. ReFHap: a reliable and fast algorithm for single individual haplotyping. In: Proceedings of the First ACM International Conference on Bioinformatics and Computational Biology. ACM: 2010. p. 160–9.
Lippert R, Schwartz R, Lancia G, Istrail S. Algorithmic strategies for the single nucleotide polymorphism haplotype assembly problem. Brief Bioinforma. 2002; 3(1):23–31.
Xie M, Wu Q, Wang J, Jiang T. H-PoP and H-PoPG: heuristic partitioning algorithms for single individual haplotyping of polyploids. Bioinformatics. 2016; 32(24):3735–44.
Pirola Y, Zaccaria S, Dondi R, Klau GW, Pisanti N, Bonizzoni P. HapCol: accurate and memory-efficient haplotype assembly from long reads. Bioinformatics. 2015; 32(11):1610–7.
Kuleshov V. Probabilistic single-individual haplotyping. Bioinformatics. 2014; 30(17):379–85.
Patterson M, Marschall T, Pisanti N, Van Iersel L, Stougie L, et al. WhatsHap: weighted haplotype assembly for future-generation sequencing reads. J Comput Biol. 2015; 22(6):498–509.
Bonizzoni P, Dondi R, Klau GW, Pirola Y, Pisanti N, Zaccaria S. On the minimum error correction problem for haplotype assembly in diploid and polyploid genomes. J Comput Biol. 2016; 23(9):718–36.
Cilibrasi R, Van Iersel L, Kelk S, Tromp J. On the complexity of several haplotyping problems. In: Algorithms in Bioinformatics. WABI 2005. Lecture Notes in Computer Science, vol 3692. Springer: 2005. p. 128–39.
Wang R, Wu L, Li Z, Zhang X. Haplotype reconstruction from SNP fragments by minimum error correction. Bioinformatics. 2005; 21(10):2456–62.
Chen Z, Deng F, Wang L. Exact algorithms for haplotype assembly from whole-genome sequence data. Bioinformatics. 2013; 29(16):1938–45.
He D, Han B, Eskin E. Hap-seq: an optimal algorithm for haplotype phasing with imputation using sequencing data. J Comput Biol. 2013; 20(2):80–92.
Bonizzoni P, Dondi R, Klau GW, Pirola Y, Pisanti N, Zaccaria S. On the fixed parameter tractability and approximability of the minimum error correction problem. In: Annual Symposium on Combinatorial Pattern Matching. Cham: Springer: 2015. p. 100–13.
He D, Choi A, Pipatsrisawat K, Darwiche A, Eskin E. Optimal algorithms for haplotype assembly from whole-genome sequence data. Bioinformatics. 2010; 26(12):183–90.
Levy S, Sutton G, Ng PC, Feuk L, Halpern AL, et al. The diploid genome sequence of an individual human. PLoS Biol. 2007; 5(10):254.
Bansal V, Halpern AL, Axelrod N, Bafna V. An MCMC algorithm for haplotype assembly from whole-genome sequence data. Genome Res. 2008; 18(8):1336–46.
Kim JH, Waterman MS, Li LM. Diploid genome reconstruction of Ciona intestinalis and comparative analysis with Ciona savignyi. Genome Res. 2007; 17(7):1101–10.
Bansal V, Bafna V. HapCUT: an efficient and accurate algorithm for the haplotype assembly problem. Bioinformatics. 2008; 24(16):153–9.
Edge P, Bafna V, Bansal V. HapCUT2: robust and accurate haplotype assembly for diverse sequencing technologies. Genome Res. 2017; 27(5):801–12.
Aguiar D, Istrail S. HapCompass: a fast cycle basis algorithm for accurate haplotype assembly of sequence data. J Comput Biol. 2012; 19(6):577–90.
Duitama J, McEwen GK, Huebsch T, Palczewski S, Schulz S, et al. Fosmid-based whole genome haplotyping of a HapMap trio child: evaluation of single individual haplotyping techniques. Nucleic Acids Res. 2011;:1042.
Das S, Vikalo H. SDhaP: haplotype assembly for diploids and polyploids via semi-definite programming. BMC Genomics. 2015; 16(1):260.
Puljiz Z, Vikalo H. Decoding genetic variations: communications-inspired haplotype assembly. IEEE/ACM Trans Comput Biol Bioinform. 2016; 13(3):518–30.
Yang WY, Hormozdiari F, Wang Z, He D, Pasaniuc B, Eskin E. Leveraging reads that span multiple single nucleotide polymorphisms for haplotype inference from sequencing data. Bioinformatics. 2013; 29(18):2245.
Delaneau O, Marchini J. Integrating sequence and array data to create an improved 1000 Genomes Project haplotype reference panel. Nat Commun. 2014; 5:3934.
Berger E, Yorukoglu D, Peng J, Berger B. HapTree: a novel Bayesian framework for single individual polyplotyping using NGS data. PLoS Comput Biol. 2014; 10(3):e1003502.
Motazedi E, Finkers R, Maliepaard C, de Ridder D. Exploiting next-generation sequencing to solve the haplotyping puzzle in polyploids: a simulation study. Brief Bioinform. 2017. bbw126, https://doi.org/10.1093/bib/bbw126.
Cai C, Sanghavi S, Vikalo H. Structured low-rank matrix factorization for haplotype assembly. IEEE J Sel Top Signal Process. 2016; 10(4):647–57.
Chaisson M, Mukherjee S, Kannan S, Eichler EE. Resolving multi-copy duplications de novo using polyploid phasing. In: International Conference on Research in Computational Molecular Biology. Cham: Springer: 2017. p. 117–33.
Renny-Byfield S, Wendel JF. Doubling down on genomes: polyploidy and crop plants. Am J Bot. 2014; 101(10):1711–25.
Prosperi MCF, Salemi M. QuRe: Software for viral quasispecies reconstruction from nextgeneration sequencing data. Bioinformatics. 2012; 28(1):132–3.
Larsen RM. Lanczos bidiagonalization with partial reorthogonalization. DAIMI Rep Ser. 1998;27(537).
Baglama J, Reichel L. Augmented implicitly restarted Lanczos bidiagonalization methods. SIAM J Sci Comput. 2005; 27(1):19–42.
Deng F, Cui W, Wang L. A highly accurate heuristic algorithm for the haplotype assembly problem. BMC Genomics. 2013; 14(2):2.
Geraci F. A comparison of several algorithms for the single individual SNP haplotyping reconstruction problem. Bioinformatics. 2010; 26(18):2217–25.
Candès EJ, Recht B. Exact matrix completion via convex optimization. Found Comput Math. 2009; 9(6):717–72.
Keshavan RH, Montanari A, Oh S. Matrix completion from noisy entries. J Mach Learn Res. 2010; 11(Jul):2057–78.
Sun R, Luo ZQ. Guaranteed matrix completion via nonconvex factorization. IEEE Trans Info Theory. 2016; 62(11):6535–79.
Gunasekar S, Acharya A, Gaur N, Ghosh J. Noisy matrix completion using alternating minimization. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Berlin: Springer: 2013. p. 194–209.
DePristo MA, Banks E, Poplin R, Garimella KV, Maguire JR, et al. A framework for variation discovery and genotyping using next-generation DNA sequencing data. Nat Genet. 2011; 43(5):491–8.
Das S, Vikalo H. OnlineCall: fast online parameter estimation and base calling for Illumina's next-generation sequencing. Bioinformatics. 2012; 28(13):1677–83.
Acknowledgements
We thank Somsubhra Barik for advice on experiments.
Funding
This work was funded by the National Science Foundation under grants CCF 1320273 and CCF 1618427. The publication costs of this article were funded by the National Science Foundation under grants CCF 1320273 and CCF 1618427.
Availability of data and materials
All data are available on request. The code for AltHap is freely available from https://github.com/realabolfazl/AltHap.
About this supplement
This article has been published as part of BMC Genomics Volume 19 Supplement 4, 2018: Selected original research articles from the Fourth International Workshop on Computational Network Biology: Modeling, Analysis, and Control (CNBMAC 2017): genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume19supplement4.
Author information
Contributions
Algorithms and experiments were designed by AH, BZ, and HV. Algorithm code was implemented and tested by AH and BZ. The manuscript was written by AH, BZ, and HV. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file
Additional file 1
Supplement for “Sparse Tensor Decomposition for Haplotype Assembly of Diploids and Polyploids”. Additional file 1 provides details on derivation of the proposed step size, and derivation of MEC and CPR bounds. (PDF 210 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Hashemi, A., Zhu, B. & Vikalo, H. Sparse Tensor Decomposition for Haplotype Assembly of Diploids and Polyploids. BMC Genomics 19 (Suppl 4), 191 (2018). https://doi.org/10.1186/s128640184551y
DOI: https://doi.org/10.1186/s128640184551y