Notation and outline
Let Σ = {A, C, G, T} be the alphabet of nucleotides (BAYES HAMMER discards k-mers with uncertain bases denoted N). A k-mer is an element of Σ^k, i.e., a string of k nucleotides. We denote the ith letter (nucleotide) of a k-mer x by x[i], indexing them from zero: 0 ≤ i ≤ k − 1. A subsequence of x corresponding to a set of indices I is denoted by x[I]. We use interval notation [i, j] for intervals of integers {i, i + 1, ..., j} and further abbreviate x[i, j] = x[{i, i + 1, ..., j}]; thus, x = x[0, k − 1]. Input reads are represented as a set of strings R ⊂ Σ* along with their quality values q_r for each r ∈ R. We assume that q_r[i] estimates the probability that there has been an error in position i of read r. Notice that in practice, the fastq file format [11] contains characters that encode these probabilities on a logarithmic scale (in particular, products of probabilities used below correspond to sums of actual quality values).
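As a concrete illustration of this encoding, the sketch below converts Sanger-style (Phred+33) quality characters from a fastq file into per-position error probabilities. The encoding details are standard fastq conventions rather than anything specific to BAYES HAMMER, and the helper names are ours.

```python
# Minimal sketch: turning a fastq quality string into error probabilities q_r[i].
# Assumes Sanger/Phred+33 encoding; function names are illustrative only.

def phred_to_error_prob(qchar: str) -> float:
    """Decode one Phred+33 quality character into an error probability."""
    q = ord(qchar) - 33          # Phred score Q
    return 10.0 ** (-q / 10.0)   # P(error) = 10^(-Q/10)

def read_error_probs(quality_string: str) -> list[float]:
    """q_r[i] for every position i of a read, as used in the text."""
    return [phred_to_error_prob(c) for c in quality_string]

# Example: 'I' encodes Q = 40, i.e., an error probability of 1e-4.
print(read_error_probs("II5#"))  # [0.0001, 0.0001, 0.01, ~0.63]
```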
Below we give an overview of the BAYES HAMMER workflow (Figure 2) and refer to subsequent sections for further details. On Step (1), k-mers in the reads are counted, producing a triple statistics(x) = (count_x, quality_x, error_x) for each k-mer x. Here, count_x is the number of times x appears as a substring in the reads, quality_x is its total quality expressed as a probability of sequencing error in x, and error_x is a k-dimensional vector that contains products of error probabilities (sums of quality values) for individual nucleotides of x across all its occurrences in the reads. On Step (2), we find connected components of the Hamming graph constructed from this set of k-mers. On Step (3), the connected components become subject to Bayesian subclustering; as a result, for each k-mer we know the center of its subcluster. On Step (4), we filter subcluster centers according to their total quality and form a set of solid k-mers, which is then iteratively expanded on Step (5) by mapping them back to the reads. Step (6) deals with reads correction by taking the majority vote of solid k-mers covering each position of a read. In the iterative version, if there has been a substantial amount of change in the reads, we run the next iteration of error correction; otherwise, we output the corrected reads. Below we describe the specific algorithms employed in the BAYES HAMMER pipeline.
Algorithms
Step (1): computing k-mer statistics
To collect k-mer statistics, we use a straightforward hash map approach [12] that does not require storing instances of all k-mers in memory (an excessive amount of RAM might be needed otherwise). For a certain positive integer N (the number of auxiliary files), we use a hash function h: Σ^k → ℤ_N that maps k-mers over the alphabet Σ to integers from 0 to N − 1.
Algorithm 1 Count k-mers
for each k-mer x from the reads R do
    compute h(x) and write x to File_h(x)
for i ∈ [0, N − 1] do
    sort File_i with respect to the lexicographic order;
    reading File_i sequentially, compute statistics(s) for each k-mer s from File_i.
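For illustration, here is a minimal Python sketch of Algorithm 1 together with a record for the statistics(x) triple. The class, the file layout, and the use of Python's built-in hash are our own simplifications (it also reuses read_error_probs from the earlier sketch); an actual implementation would keep the per-position products as sums of log-scale quality values.

```python
import os
from dataclasses import dataclass, field

@dataclass
class KmerStatistics:
    """statistics(x) = (count_x, quality_x, error_x) for a k-mer x (field names are ours)."""
    k: int
    count: int = 0                              # count_x: occurrences of x in the reads
    error: list = field(default_factory=list)   # error_x[i]: combined error prob. of position i

    def add_occurrence(self, error_probs):
        if not self.error:
            self.error = [1.0] * self.k
        self.count += 1
        # products of error probabilities across occurrences (sums of quality values)
        self.error = [e * p for e, p in zip(self.error, error_probs)]

    @property
    def quality(self):
        """quality_x: probability that x is error-free (cf. Steps (4)-(5))."""
        q = 1.0
        for e in self.error:
            q *= 1.0 - e
        return q

def count_kmers(reads, quals, k, n_files, workdir="kmer_parts"):
    """Algorithm 1 sketch: distribute k-mers into N files by hash, then process each file."""
    os.makedirs(workdir, exist_ok=True)
    parts = [open(os.path.join(workdir, f"part_{i}.txt"), "w") for i in range(n_files)]
    for read, qual in zip(reads, quals):
        probs = read_error_probs(qual)          # per-position error probabilities q_r[i]
        for i in range(len(read) - k + 1):
            x = read[i:i + k]
            if "N" in x:                        # k-mers with uncertain bases are discarded
                continue
            parts[hash(x) % n_files].write(
                x + "\t" + ",".join(map(str, probs[i:i + k])) + "\n")
    for f in parts:
        f.close()
    # Identical k-mers always land in the same file, so each file can be handled independently.
    stats = {}
    for i in range(n_files):
        with open(os.path.join(workdir, f"part_{i}.txt")) as f:
            for line in sorted(f):              # sorting groups identical k-mers together
                x, prob_str = line.rstrip("\n").split("\t")
                stats.setdefault(x, KmerStatistics(k)).add_occurrence(
                    [float(p) for p in prob_str.split(",")])
    return stats
```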
Step (2): constructing connected components of Hamming graph
Step (2) is the essence of the HAMMER approach [8]. The Hamming distance between k-mers x, y ∈ Σ^k is the number of nucleotides in which they differ:

d(x, y) = |{i ∈ [0, k − 1] : x[i] ≠ y[i]}|.
For a set of k-mers X, the Hamming graph HG_τ(X) is an undirected graph with the set of vertices X and edges corresponding to pairs of k-mers from X with Hamming distance at most τ, i.e., x, y ∈ X are connected by an edge in HG_τ(X) iff d(x, y) ≤ τ (Figure 3). To construct HG_τ(X) efficiently, we notice that if two k-mers are at Hamming distance at most τ and we partition the set of indices [0, k − 1] into τ + 1 parts, then at least one part corresponds to the same subsequence in both k-mers. Below we assume with little loss of generality that τ + 1 divides k, i.e., k = σ(τ + 1) for some integer σ.
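For instance (our illustration), with k = 4 and τ = 1, partitioning the indices into {0, 1} and {2, 3} means that any two 4-mers differing in at most one position must agree on at least one of the two halves, since a single mismatch can fall into only one part.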
For a subset of indices I ⊆ [0, k − 1], we define a partial lexicographic ordering ≺_I as follows: x ≺_I y iff x[I] ≺ y[I], where ≺ is the lexicographic ordering on Σ*. Similarly, we define a partial equality =_I such that x =_I y iff x[I] = y[I]. We partition the set of indices [0, k − 1] into τ + 1 parts of size σ and for each part I, sort a separate copy of X with respect to ≺_I. As noticed above, for every two k-mers x, y ∈ X with d(x, y) ≤ τ, there exists a part I such that x =_I y. It therefore suffices to separately consider blocks of equivalent k-mers with respect to =_I for each part I. If a block is small (i.e., of size smaller than a certain threshold), we go over the pairs of k-mers in this block to find those with Hamming distance at most τ. If a block is large, we recursively apply to it the same procedure with a different partition of the indices. In practice, we use two different partitions of [0, k − 1] (recall that k = σ(τ + 1)): the first corresponds to contiguous subsets of indices,

I^(1)_s = [sσ, (s + 1)σ − 1], s = 0, ..., τ,

while the second corresponds to strided subsets of indices:

I^(2)_s = {i ∈ [0, k − 1] : i ≡ s (mod τ + 1)}, s = 0, ..., τ.
Algorithm 2 Hamming graph processing
procedure HGPROCESS(X, max_quadratic)
    init components with singletons {x}, x ∈ X
    for all Y ∈ FINDBLOCKS(X, {I^(1)_s}) do
        if |Y| > max_quadratic then
            for all Z ∈ FINDBLOCKS(Y, {I^(2)_s}) do
                PROCESSEXHAUSTIVELY(Z, components)
        else
            PROCESSEXHAUSTIVELY(Y, components)
function FINDBLOCKS(X, {I_s})
    for s = 0, ..., τ do
        sort a copy of X with respect to ≺_{I_s}, getting X_s
    for s = 0, ..., τ do
        output the set of equivalence blocks of X_s with respect to =_{I_s}
procedure PROCESSEXHAUSTIVELY(Y, components)
    for each pair x, y ∈ Y do
        if d(x, y) ≤ τ then
            join the components of x and y in components (disjoint set union)
BAYES HAMMER uses a two-step procedure, first splitting with respect to the contiguous partition {I^(1)_s} (Figure 4) and then, if an equivalence block is large, with respect to the strided partition {I^(2)_s}. On the block processing step, we use the disjoint set data structure [12] to maintain the set of connected components. Step (2) is summarized in Algorithm 2.
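The sketch below illustrates this blocking scheme in Python: k-mers are grouped by their subsequence on each part of a partition, small blocks are compared exhaustively, large blocks are re-split with the strided partition, and connected components are maintained with a simple union-find structure. The function names and the union-find implementation are ours; this is an illustration of the scheme, not the BAYES HAMMER implementation.

```python
from collections import defaultdict
from itertools import combinations

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def partitions(k, tau):
    """The two partitions of [0, k-1] used in the text (k assumed divisible by tau+1)."""
    sigma = k // (tau + 1)
    contiguous = [list(range(s * sigma, (s + 1) * sigma)) for s in range(tau + 1)]
    strided = [list(range(s, k, tau + 1)) for s in range(tau + 1)]
    return contiguous, strided

def blocks(kmers, parts):
    """Group k-mers that agree on some part I (equivalence blocks w.r.t. =_I)."""
    for I in parts:
        groups = defaultdict(list)
        for x in kmers:
            groups["".join(x[i] for i in I)].append(x)
        yield from groups.values()

def hamming_components(kmers, tau, max_quadratic=1000):
    """Connected components of HG_tau(X) via blocking plus union-find (sketch)."""
    parent = {x: x for x in kmers}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    def process_exhaustively(block):
        for x, y in combinations(block, 2):
            if hamming(x, y) <= tau:
                union(x, y)
    contiguous, strided = partitions(len(next(iter(kmers))), tau)
    for block in blocks(kmers, contiguous):
        if len(block) > max_quadratic:
            for sub in blocks(block, strided):
                process_exhaustively(sub)
        else:
            process_exhaustively(block)
    components = defaultdict(list)
    for x in kmers:
        components[find(x)].append(x)
    return list(components.values())
```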
Step (3): Bayesian subclustering
In HAMMER's generative model [8], it is assumed that errors in each position of a k-mer are independent and occur with the same probability ε, which is a fixed global parameter (HAMMER used ε = 0.01). Thus, the likelihood that a k-mer x was generated from a k-mer y under HAMMER's model equals

L(x | y) = (1 − ε)^(k − d(x, y)) · ε^(d(x, y)).
Under this model, the maximum likelihood center of a cluster is simply its consensus string [8].
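For example (our illustration), for a cluster consisting of the 4-mers ACGT, ACGA, and ACGT, the per-position majority gives the consensus ACGT, which is therefore the maximum likelihood center under this model.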
In BAYES HAMMER, we further elaborate upon HAMMER's model. Instead of a fixed ε, we use read quality values that approximate the probabilities q_x[i] of the nucleotide at position i in the k-mer x being erroneous. We combine quality values from identical k-mers in the reads: for a multiset of k-mers X that agree on the jth nucleotide, it is erroneous with probability ∏_{x∈X} q_x[j].

The likelihood that a k-mer x has been generated from another k-mer c (under the independent errors assumption) is given by

L(x | c) = ∏_{i: x[i] = c[i]} (1 − q_x[i]) · ∏_{i: x[i] ≠ c[i]} q_x[i],

and the likelihood of a specific subclustering C = C_1 ∪ ... ∪ C_m is

L_m(C_1, ..., C_m) = ∏_{i=1}^{m} ∏_{x ∈ C_i} L(x | c_i),

where c_i is the center (consensus string) of the subcluster C_i.
In the subclustering procedure (see Algorithm 3), we sequentially subcluster each connected component of the Hamming graph into more and more clusters with the classical k-means clustering algorithm (denoted m-means here since k has a different meaning). For the objective function, we use the likelihood as above penalized for overfitting with the Bayesian information criterion (BIC) [13]. In this case, there are |C| observations in the dataset, and the total number of parameters is 3km + m − 1:

- m − 1 for probabilities of subclusters,
- km for cluster centers, and
- 2km for error probabilities in each letter: there are 3 possible errors for each letter, and the probabilities should sum up to one. Here error probabilities are conditioned on the fact that an error has occurred (alternatively, we could consider the entire distribution, including the correct letter, and get 3km parameters for probabilities, but then there would be no need to specify cluster centers, so the total number is the same).
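For instance (our illustrative numbers, not taken from the paper), with k = 21 and m = 3 subclusters the count gives 3·21·3 + 3 − 1 = 191 parameters, so the BIC penalty subtracted from twice the log-likelihood is 191·log |C|.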
Algorithm 3 Bayesian subclustering
for all connected components C of the Hamming graph do
    m := 1
    ℓ_1 := 2 log L_1(C) (likelihood of the cluster generated by the consensus)
    repeat
        m := m + 1
        do m-means clustering of C = C_1 ∪ ... ∪ C_m w.r.t. the Hamming distance; the initial approximation to the centers is given by the k-mers that have the least error probability
        ℓ_m := 2 · log L_m(C_1, ..., C_m) − (3km + m − 1) · log |C|
    until ℓ_m ≤ ℓ_{m−1}
    output the best found clustering C = C_1 ∪ ... ∪ C_{m−1}
Therefore, the resulting objective function is

ℓ_m = 2 log L_m(C_1, ..., C_m) − (3km + m − 1) log |C|

for subclustering into m clusters; we stop as soon as ℓ_m ceases to increase.
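Below is a compact Python sketch of this subclustering loop: m-means over the Hamming distance with consensus centers, scored by the BIC-penalized log-likelihood above. The initialization by lowest-error k-mers follows the text, but the function names, the clamping of probabilities, and the decision to apply the BIC penalty to m = 1 as well (so that scores are directly comparable) are our own simplifications.

```python
import math

def consensus(cluster):
    """Consensus string: per-position majority over the k-mers in the cluster."""
    k = len(cluster[0])
    return "".join(max("ACGT", key=lambda a: sum(x[i] == a for x in cluster)) for i in range(k))

def log_likelihood(cluster, center, q):
    """Sum of log L(x | c) over the cluster; q[x][i] is the error probability at position i of x."""
    ll = 0.0
    for x in cluster:
        for i, (a, b) in enumerate(zip(x, center)):
            p = min(max(q[x][i], 1e-12), 1.0 - 1e-12)   # clamp to keep logs finite
            ll += math.log(1.0 - p) if a == b else math.log(p)
    return ll

def m_means(kmers, m, q, iters=20):
    """Plain m-means over Hamming distance, seeded with the lowest-error k-mers."""
    centers = sorted(kmers, key=lambda x: sum(q[x]))[:m]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in kmers:
            j = min(range(len(centers)),
                    key=lambda j: sum(a != b for a, b in zip(x, centers[j])))
            clusters[j].append(x)
        clusters = [c for c in clusters if c]
        centers = [consensus(c) for c in clusters]
    return clusters, centers

def subcluster(component, q):
    """Increase m until the BIC-penalized score l_m stops improving (Algorithm 3 sketch)."""
    k, n = len(component[0]), len(component)
    best_clusters, best_centers = [component], [consensus(component)]
    # We penalize m = 1 as well (3k parameters) so that all scores are comparable.
    best_score = 2 * log_likelihood(component, best_centers[0], q) - 3 * k * math.log(n)
    m = 1
    while m < n:
        m += 1
        clusters, centers = m_means(component, m, q)
        ll = sum(log_likelihood(c, ctr, q) for c, ctr in zip(clusters, centers))
        score = 2 * ll - (3 * k * m + m - 1) * math.log(n)
        if score <= best_score:
            break
        best_clusters, best_centers, best_score = clusters, centers, score
    return best_clusters, best_centers
```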
Steps (4) and (5): selecting solid k-mers and expanding the set of solid k-mers
We define the quality of a k-mer x as the probability that it is error-free:

quality_x = ∏_{i=0}^{k−1} (1 − error_x[i]),

where error_x[i] is the combined error probability of the ith nucleotide of x. The k-mer qualities are computed on Step (1) along with the k-mer statistics. Next, we (generously) define the quality of a cluster C as the probability that at least one k-mer in C is correct:

quality_C = 1 − ∏_{x∈C} (1 − quality_x).
In contrast to HAMMER, we do not distinguish whether the cluster is a singleton (i.e., |C| = 1); there may be plenty of superfluous clusters with several k-mers obtained by chance (actually, it is more likely to obtain a cluster of several k-mers by chance than a singleton of the same total multiplicity).
Initially we mark as solid the centers of the clusters whose total quality exceeds a predefined threshold (a global parameter for BAYES HAMMER, set to be rather strict). Then we expand the set of solid k-mers iteratively: if a read is completely covered by solid k-mers we conclude that it actually comes from the genome and mark all other k-mers in this read as solid, too (Algorithm 4).
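A small sketch of Step (4) under the formulas above: aggregate the k-mer qualities over a cluster and mark the cluster center solid when the cluster quality exceeds a threshold. It reuses the stats dictionary and quality property from the Step (1) sketch; the threshold value and function names are placeholders, not BAYES HAMMER's actual parameters.

```python
def cluster_quality(cluster, stats):
    """Probability that at least one k-mer in the cluster is correct."""
    p_all_wrong = 1.0
    for x in cluster:
        p_all_wrong *= 1.0 - stats[x].quality   # quality_x from the Step (1) sketch
    return 1.0 - p_all_wrong

def initial_solid_centers(clusters, centers, stats, threshold=0.99):
    """Step (4): keep centers of clusters whose total quality exceeds the (placeholder) threshold."""
    return {c for cluster, c in zip(clusters, centers)
            if cluster_quality(cluster, stats) > threshold}
```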
Step (6): reads correction
After Steps (1)-(5), we have constructed the set of solid k-mers that are presumably error-free. To construct corrected reads from the set of solid k-mers, for each base of every read, we compute the consensus of all solid k-mers and solid centers of clusters of all non-solid k-mers covering this base (Figure 5). This step is formally described as Algorithm 5.
Algorithm 4 Solid k-mers expansion
procedure ITERATIVEEXPANSION(R, X)
    repeat EXPANSIONSTEP(R, X) until it returns FALSE
function EXPANSIONSTEP(R, X)
    for all reads r ∈ R do
        if r is completely covered by solid k-mers then
            mark all k-mers in r as solid (add them to X)
    return TRUE if X has increased and FALSE otherwise
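In Python, the expansion loop of Algorithm 4 could look as follows; this is a sketch with our own function names, where "completely covered" means every base of the read lies under at least one solid k-mer.

```python
def expansion_step(reads, solid, k):
    """One pass of Algorithm 4: if every base of a read is covered by at least one solid
    k-mer, mark all remaining k-mers of that read as solid as well."""
    before = len(solid)
    for r in reads:
        covered = [False] * len(r)
        for i in range(len(r) - k + 1):
            if r[i:i + k] in solid:
                for j in range(i, i + k):
                    covered[j] = True
        if len(r) >= k and all(covered):
            solid.update(r[i:i + k] for i in range(len(r) - k + 1))
    return len(solid) > before

def iterative_expansion(reads, solid, k):
    """Repeat expansion steps until the set of solid k-mers stops growing."""
    while expansion_step(reads, solid, k):
        pass
    return solid
```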
Algorithm 5 Reads correction
Input: reads R, solid k-mers X, clusters 𝒞.
for all reads r ∈ R do
    init consensus array υ: [0, |r| − 1] × {A, C, G, T} → ℕ with zeros: υ(i, a) := 0 for all i = 0, ..., |r| − 1 and a ∈ {A, C, G, T}
    for i = 0, ..., |r| − k do
        if r[i, i + k − 1] ∈ X (it is solid) then
            for j ∈ [i, i + k − 1] do
                υ(j, r[j]) := υ(j, r[j]) + 1
        if r[i, i + k − 1] ∈ C for some C ∈ 𝒞 then
            let x be the center of C
            if x ∈ X (r[i, i + k − 1] belongs to a cluster with solid center) then
                for j ∈ [i, i + k − 1] do
                    υ(j, x[j − i]) := υ(j, x[j − i]) + 1
    for i ∈ [0, |r| − 1] do
        r[i] := arg max_{a∈Σ} υ(i, a).
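Finally, a Python sketch of Algorithm 5: every position of a read collects votes from the solid k-mers covering it and from solid centers of clusters containing the non-solid k-mers covering it, and the consensus base wins. The lookup cluster_center_of (from a k-mer to its subcluster center) and the safeguard for uncovered positions are our own assumptions for this sketch.

```python
def correct_read(read, k, solid, cluster_center_of):
    """Algorithm 5 sketch: per-base consensus of solid k-mers and solid cluster centers."""
    votes = [{a: 0 for a in "ACGT"} for _ in read]   # consensus array v(i, a)
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        if kmer in solid:
            # A solid k-mer votes for its own letters.
            for j in range(i, i + k):
                votes[j][read[j]] += 1
        else:
            # A non-solid k-mer votes with its subcluster center, if that center is solid.
            center = cluster_center_of.get(kmer)
            if center is not None and center in solid:
                for j in range(i, i + k):
                    votes[j][center[j - i]] += 1
    corrected = []
    for i, base in enumerate(read):
        best = max("ACGT", key=lambda a: votes[i][a])
        # Keep the original base when no k-mer voted for this position (our safeguard).
        corrected.append(best if votes[i][best] > 0 else base)
    return "".join(corrected)
```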