Volume 14 Supplement 1
Selected articles from the Eleventh Asia Pacific Bioinformatics Conference (APBC 2013): Genomics
Expanding the boundaries of local similarity analysis
 W Evan Durno^{1},
 Niels W Hanson^{2},
 Kishori M Konwar^{1} and
 Steven J Hallam^{1} (corresponding author)
DOI: 10.1186/1471-2164-14-S1-S3
© Durno et al.; licensee BioMed Central Ltd. 2013
Published: 21 January 2013
Abstract
Background
Pairwise comparison of time series data for both local and time-lagged relationships is a computationally challenging problem relevant to many fields of inquiry. The Local Similarity Analysis (LSA) statistic identifies the existence of local and lagged relationships, but determining significance through a p-value has been algorithmically cumbersome due to an intensive permutation test, shuffling rows and columns and repeatedly calculating the statistic. Furthermore, this p-value is calculated under an assumption of normality, a statistical luxury dissociated from most real-world datasets.
Results
To improve the performance of LSA on big datasets, an asymptotic upper bound on the p-value calculation was derived without the assumption of normality. This change in the bound calculation markedly improved computational speed from O(pm^{2}n) to O(m^{2}n), where p is the number of permutations in a permutation test, m is the number of time series, and n is the length of each time series. The bounding process is implemented as a computationally efficient software package, FAST LSA, written in C and optimized for threading on multicore computers, improving its practical computation time. We computationally compare our approach to previous implementations of LSA, demonstrate broad applicability by analyzing time series data from public health, microbial ecology, and social media, and visualize resulting networks using the Cytoscape software.
Conclusions
The FAST LSA software package expands the boundaries of LSA allowing analysis on datasets with millions of covarying time series. Mapping metadata onto force-directed graphs derived from FAST LSA allows investigators to view correlated cliques and explore previously unrecognized network relationships. The software is freely available for download at: http://www.cmde.science.ubc.ca/hallam/fastLSA/.
Background
The exponential increase and ubiquitous use of computational technology have given rise to an era of "Big Data" that pushes the limits of conventional data analysis [1–3]. Techniques for analyzing big datasets often proceed by identifying patterns of co-occurrence or correlation through principal component analysis (PCA) [4], multidimensional scaling (MDS) [5], etc. However, many of these methods require significant data reduction or smoothing, which makes them difficult to interpret [6]. Other methods such as multiple linear regression or Pearson's correlation coefficient (PCC) are easy to interpret because they operate on data in their native data space, without any kind of large data transformation or dimensionality reduction, but are limited in the structure that they can detect.
Though PCC is a classic and powerful technique for finding linear relationships between two variables, it is not designed for capturing lead-lag relationships seen in time series data. Local similarity analysis (LSA) [6] extends correlation calculations to include the time variable, enabling identification of local correlates. Furthermore, Ruan et al. have presented a graphical network framework in which to visualize the structure of significant LSA correlations. Unfortunately, the current implementation of LSA requires multiple runs on permuted data and a Monte Carlo statistical method known as a permutation test to evaluate a null distribution and obtain a p-value determining significance. Each iteration of this procedure has a computational complexity of O(pm^{2}n), where p is the number of permutations, m is the number of covariate time series, and n is their length. Due to the number of pairwise calculations needed, extant LSA is computationally onerous when m is large, limiting its use to datasets where the number of observed variables at each time point is small (< 100). Though there has been some improvement to its performance [7], assumptions of normality and implementation issues continue to stymie practical application of LSA on big datasets.
Here we describe a novel asymptotic upper bound on the calculation of the LSA statistic's p-value, resulting in an exponentially converging calculation to bound and check for significance of computed LSA statistics without a computationally intensive permutation test. This bound does not require a rank-vector normal transformation, promoting application to any distribution that has finite variance. As a result, this implementation of LSA can navigate big datasets with millions of covariate time series. We demonstrate this using time series datasets from public health [8], microbial ecology [9], and social media [10]. The implemented algorithm, named FAST LSA, is written in C and optimized for threading on multicore computers.
Interpreting the LSA statistic
LSA is advantageous on large datasets containing many time series. Results can be visualized as a graphical network where nodes represent the individual time series and the edges represent their LSA correlation statistic. When displayed using a force-directed layout in Cytoscape [11], closely related time series cluster together, visually isolating clusters of local similarity. Metadata related to experimental or environmental conditions can then be applied to the nodes, shedding insight into hierarchical network structure.
Implementation
Description of the LSA algorithm
In this section we reproduce the algorithm from [6] to compute LSA statistics and their corresponding p-values between pairs of time series in a dataset. We assume as input a set of time series vectors of equal length. Let us denote the number of time series by m and their length by n. Let us denote the time series dataset as X, where X_{ij} denotes the jth element of the ith time series, with i = 1, 2, ..., m and j = 1, 2, ..., n, and assume that the X_{ij} are real numbers. We also assume that there are no missing values in the dataset X, and recognize that practical use will require interpolation or filtering.
The algorithm first initializes the arrays P_{j,0}, N_{j,0}, P_{0,i}, and N_{0,i} for all i, j = 1, ..., n, where D is the maximum allowed absolute difference (lag) between indices. Next it considers the time series pairs for each possible lag, up to a maximum of D, and then computes the progressive sum of the pairwise products of the time series values from the low to the high index of the arrays. During the computation, the partial sum is reset to 0 whenever it drops below 0. After the partial sums have been computed, the values of $\hat{N}$ and $\hat{P}$ are calculated by taking the maximum of the corresponding values of the arrays N and P. Finally, the LSA statistic is computed as $\mathrm{sign}\left(\hat{P}-\hat{N}\right)\frac{\mathsf{\text{max}}\left\{\hat{P},\hat{N}\right\}}{n}$.
Calculating the upper bound
In this section we derive the asymptotic upper bound on the p-value for the cumulative probability distribution of the LSA statistic without the need for a normality assumption. Our derivation is based on distributional results for the maximum cumulative sum of independent random variables known in the probability literature [12–15]. We begin by stating our assumptions about the dataset, isolate the target calculations from the LSA algorithm, and, from our referenced mathematical results, derive and prove important lemmas. These lemmas serve as the building blocks for a theorem which forms the basis of our LSA p-value upper bound.
We begin by making certain assumptions about the probability model used to derive the bounds. First, each P_{i,j} or N_{i,j} is considered individually. We assume that the time series values X_{i}, Y_{j} for i, j = 1, ..., n are independent of one another. This assumption can be made when only weak dependence exists because it is close to the truth and effective, much like the Naive Bayes assumption. The assumption is also enabling, as it allows us to invoke the distributions of partial sums of independent random variables and proceed in a mathematically straightforward way. Further, we assume independence between each pair of time series as a null hypothesis, which is subject to rejection upon obtaining a statistically significant LSA value.
Consider lines 5 and 7 of the LSA algorithm (Figure 2), P_{i+k+1,j+k+1} ← max{0, P_{i+k,j+k} + X_{i+k} · Y_{j+k}} and $\hat{P}\leftarrow {\mathsf{\text{max}}}_{\left\{\left(i,j\right):|i-j|\le D\right\}}\left\{{P}_{i,j}\right\}$. For any pair i and j, let us define the sequence of random variables Z_{k} = X_{i+k}Y_{j+k} for k = 0, ..., min{n − i, n − j} − 1, and the sequence of random variables ζ_{k} = Z_{1} + ... + Z_{k} for k = 0, ..., min{n − i, n − j} − 1, supposing ζ_{0} = 0. Using the above ζ_{k}'s, we define random variables ${\eta}_{k}^{*}$ as ${\eta}_{k}^{*}=\mathsf{\text{max}}\left\{{\zeta}_{1},{\zeta}_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\zeta}_{k}\right\}$ for the same values of k = 0, ..., min{n − i, n − j} − 1.
We also define the set of random variables η_{1}, η_{2}, ..., η_{k} by the recurrence formula η_{k+1} = max{0, η_{k} + Z_{k+1}}. Note that the random variables P_{i+k,j+k} and the η_{k} have the same distribution. It is shown in [12, 13] that the random variables ${\eta}_{k}^{*}$ and η_{k} also have the same distribution. As a result, we can analyze the cumulative distribution of P_{i+k,j+k} as a distribution for ${\eta}_{k}^{*}$, and use the results of Nevzorov and Petrov [14] on P_{i+k,j+k} to derive tail probability bounds. We also assume that the random variables Z_{k} have finite first two moments; although such assumptions are not required for the results of [14], we use them to derive simpler bounds.
We now consider a few useful lemmas that we will use to construct our p-value upper bound. The first step is to decompose the tail event (which we will later connect to the p-value) into simpler terms. The following lemma expresses the tail event for LSA, {|LSA| > x} for any $x\in \mathbb{R}$, in terms of the tail events {P_{i,j} > xn} and {N_{i,j} > xn} of the positive and negative LSA calculations for the same x.
Lemma 1 For any $x\in \mathbb{R}$ we have {|LSA| > x} = {(∪_{ij}{P_{ij} > xn}) ∪ (∪_{ij}{N_{ij} > xn})}.
Proof. The result is clear from the following:
$\begin{array}{c}\left\{|LSA|>x\right\}=\left\{\mathsf{\text{max}}\left\{\hat{P},\hat{N}\right\}>xn\right\}={\left\{\hat{P}\le xn\cap \hat{N}\le xn\right\}}^{c}={\left\{{\mathsf{\text{max}}}_{ij}\left\{{P}_{ij}\right\}\le xn\cap {\mathsf{\text{max}}}_{ij}\left\{{N}_{ij}\right\}\le xn\right\}}^{c}=\hfill \\ {\left\{\left({\cap}_{ij}\left\{{P}_{ij}\le xn\right\}\right)\cap \left({\cap}_{ij}\left\{{N}_{ij}\le xn\right\}\right)\right\}}^{c}=\left\{\left({\cup}_{ij}\left\{{P}_{ij}>xn\right\}\right)\cup \left({\cup}_{ij}\left\{{N}_{ij}>xn\right\}\right)\right\}\hfill \end{array}$ □
In the LSA algorithm, we have the maxima P_{ij} = max{0, P_{i−1,j−1} + X_{i−1}Y_{j−1}} and N_{ij} = max{0, N_{i−1,j−1} − X_{i−1}Y_{j−1}}, which complicates their theoretical analysis. Fortunately, an equivalence has been demonstrated in the literature [12], and we restate it in the following lemma for clarity: the distributions of η_{k} and ${\eta}_{k}^{*}$ coincide for k = 1, ..., min{n − i, n − j} − 1. This will help us derive bounds for the events {P_{ij} > xn} and {N_{ij} > xn}, the simpler terms obtained in the previous lemma.
Lemma 2 Let Z_{i} be mutually independent random variables, let ${S}_{k}={\sum}_{i=1}^{k}{Z}_{i}$ with S_{0} = 0, and let q_{k+1} = max{0, q_{k} + Z_{k+1}} with q_{0} = 0. Then P(q_{k} ≤ x) = P(max{S_{0}, ..., S_{k}} ≤ x) for $x\in \mathbb{R}$.
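The identity in Lemma 2 is distributional, not pathwise, and can be verified exactly on a toy model by exhaustive enumeration. The sketch below, with names and the ±1 increment model of our own choosing, enumerates all 2^K equiprobable sign paths and compares the histogram of the recurrence value against that of the prefix maximum of the partial sums.

```c
#include <assert.h>

/* Exact check of Lemma 2's distributional identity for the toy case
 * Z_i = +/-1 with equal probability: enumerate all 2^K sign paths and
 * compare the histogram of q_K (the max{0, q + Z} recurrence) with
 * that of max{S_0, ..., S_K}. Illustrative names, not from fastLSA. */
enum { K = 10 };

void lemma2_histograms(int hist_q[K + 1], int hist_m[K + 1]) {
    for (int i = 0; i <= K; i++) hist_q[i] = hist_m[i] = 0;
    for (unsigned path = 0; path < (1u << K); path++) {
        int q = 0, s = 0, smax = 0;
        for (int k = 0; k < K; k++) {
            int z = ((path >> k) & 1) ? 1 : -1;
            q += z; if (q < 0) q = 0;        /* q_{k+1} = max{0, q_k + Z_{k+1}} */
            s += z; if (s > smax) smax = s;  /* running sum and its prefix maximum */
        }
        hist_q[q]++;                         /* both statistics lie in 0..K */
        hist_m[smax]++;
    }
}
```

The two histograms agree bin for bin, even though the recurrence value and the prefix maximum differ on individual paths.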
In order to obtain a simple formula for the bound on the cumulative tail probabilities of P_{i,j} and N_{i,j}, we reproduce below the results on partial sums of random variables due to Nevzorov and Petrov [14]. For the sequence of independent and identically distributed (iid) random variables under consideration, {X_{n}}, Lindeberg's condition holds [15]: a property ensuring that no single variable dominates the variance of the partial sums, pinning down the tails of their distribution. In terms of time series, as a series grows longer, the upper bound of the distribution becomes better defined and calculable.
We now build the theorems from which we will derive a formulaic p-value bound.
Theorem 3 If the random variables {X_{n}} have zero expectation and finite variances, and if Lindeberg's condition holds: Λ_{n}(ε) → 0 as n → ∞ for all ε > 0, where ${\Lambda}_{n}\left(\epsilon \right)=\frac{1}{{q}_{n}^{2}}{\sum}_{k=1}^{n}{\int}_{\left\{|x|>\epsilon {q}_{n}\right\}}{x}^{2}d{V}_{k}\left(x\right)$ and ${q}_{n}^{2}={\sum}_{k=1}^{n}\mathbf{E}\left({X}_{k}^{2}\right)$, and $G\left(x\right)=\sqrt{\frac{2}{\pi}}{\int}_{0}^{x}{e}^{-{t}^{2}/2}dt$ if x ≥ 0 and 0 if x < 0, then we have ${\mathsf{\text{sup}}}_{x}\left|\mathbf{P}\left({\overline{S}}_{n}<{q}_{n}x\right)-G\left(x\right)\right|\to 0$, where ${\overline{S}}_{n}={\mathsf{\text{max}}}_{1\le k\le n}{\sum}_{j=1}^{k}{X}_{j}$ and V_{k}(x) = P(X_{k} ≤ x).
In order to apply the above theorem to obtain a simple formulaic approximation, we assume random variables ${\left\{{Z}_{i}\right\}}_{1}^{m}$, each with variance σ^{2}, and ${S}_{k}={\sum}_{i=1}^{k}{Z}_{i}$. Then by applying the above theorem, we get the following uniform convergence of distribution to that of the one-sided standard normal: ${\mathsf{\text{sup}}}_{x}\left|\mathbf{P}\left({\mathsf{\text{max}}}_{k\in \left\{1,\dots ,m\right\}}{S}_{k}\le \sqrt{m}\sigma x\right)-G\left(x\right)\right|\to 0$ as m → ∞.
Now we use the above results to obtain probability estimates for our simple event terms {P_{ij} > xn} and {N_{ij} > xn}. The following theorem provides the p-value's tail bound for LSA for any $x\in \mathbb{R}$.
Theorem 4 For G, the one-sided normal distribution defined above, $\mathsf{\text{P}}\left(|LSA|>x\right)\le 2\left({n}^{2}-\left(n-D-1\right)\left(n-D\right)\right)\left(1-G\left(x\sqrt{n/\mathrm{Var}\left({X}_{1}{Y}_{1}\right)}\right)\right).$
Empirical p-value (Emp) and the FAST LSA p-value bound (Fas) with n = 30, 50, and 100 time steps.
x  Emp (n=30)  Fas (n=30)  Emp (n=50)  Fas (n=50)  Emp (n=100)  Fas (n=100)
0.05  1  1.000  1  1.000  1  1.000 
0.07  1  1.000  1  1.000  0.997  1.000 
0.09  1  1.000  0.999  1.000  0.953  1.000 
0.11  0.999  1.000  0.984  1.000  0.819  1.000 
0.13  0.989  1.000  0.928  1.000  0.627  1.000 
0.15  0.958  1.000  0.823  1.000  0.441  1.000 
0.17  0.896  1.000  0.687  1.000  0.292  1.000 
0.19  0.803  1.000  0.545  1.000  0.184  1.000 
0.21  0.694  1.000  0.417  1.000  0.111  1.000 
0.23  0.58  1.000  0.309  1.000  0.064  1.000 
0.25  0.472  1.000  0.224  1.000  0.036  1.000 
0.27  0.376  1.000  0.158  1.000  0.019  0.693 
0.29  0.294  1.000  0.109  1.000  0.009  0.373 
0.31  0.227  1.000  0.073  1.000  0.005  0.194 
0.33  0.172  1.000  0.048  0.981  0.002  0.097 
0.35  0.128  1.000  0.031  0.666  0.001  0.047 
0.37  0.094  1.000  0.019  0.444  < 0.001  0.022 
0.39  0.067  0.98  0.012  0.291  < 0.001  0.01 
0.41  0.048  0.742  0.007  0.187  < 0.001  0.004 
0.43  0.033  0.555  0.004  0.118  < 0.001  0.002 
0.45  0.023  0.411  0.002  0.073  < 0.001  0.001 
0.47  0.015  0.301  0.001  0.044  < 0.001  < 0.001 
0.49  0.01  0.218  0.001  0.027  < 0.001  < 0.001 
0.51  0.006  0.156  < 0.001  0.016  < 0.001  < 0.001 
0.53  0.004  0.111  < 0.001  0.009  < 0.001  < 0.001 
0.55  0.002  0.078  < 0.001  0.005  < 0.001  < 0.001 
0.57  0.001  0.054  < 0.001  0.003  < 0.001  < 0.001 
0.59  0.001  0.037  < 0.001  0.002  < 0.001  < 0.001 
0.61  < 0.001  0.025  < 0.001  0.001  < 0.001  < 0.001 
0.63  < 0.001  0.017  < 0.001  < 0.001  < 0.001  < 0.001 
0.65  < 0.001  0.011  < 0.001  < 0.001  < 0.001  < 0.001 
0.67  < 0.001  0.007  < 0.001  < 0.001  < 0.001  < 0.001 
0.69  < 0.001  0.005  < 0.001  < 0.001  < 0.001  < 0.001 
0.71  < 0.001  0.003  < 0.001  < 0.001  < 0.001  < 0.001 
0.73  < 0.001  0.002  < 0.001  < 0.001  < 0.001  < 0.001 
0.75  < 0.001  0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.77  < 0.001  0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.79  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.81  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.83  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.85  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.87  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.89  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.91  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.93  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.95  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.97  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001 
0.99  < 0.001  < 0.001  < 0.001  < 0.001  < 0.001  0.001 
Results
To validate the versatility and effectiveness of the derived upper bound (Theorem 4), we applied the algorithm to four datasets: two sourced from biology, one from social networking, and a randomly generated control dataset. These include the Moving Pictures of the Human Microbiome [8] (MPH), the largest human microbial time series to date; a microarray hybridization dataset identifying cell cycle-regulated genes in the yeast Saccharomyces cerevisiae [9] (CDC); and an online social media dataset of the volumes of the top 1000 Memetracker phrases and top 1000 Twitter hashtags over an eight month period from September 2008 to August 2009 [10]. Missing data values were interpolated by averaging the two nearest temporal data points, and all analysis was performed on a Mac Pro desktop computer running Mac OS X 10.6.8 with 2 × 2.4 GHz quad-core Intel Xeon processors and 16 GB of 1066 MHz DDR3 RAM.
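The missing-value handling described above can be sketched as follows. This is our reading of "averaging the two nearest temporal data points" (falling back to a single neighbor at the boundaries); the actual fastLSA pre-processing may differ in detail, and the function name is ours.

```c
#include <assert.h>
#include <math.h>

/* Sketch of missing-value interpolation: replace each NaN entry with
 * the average of the nearest non-missing neighbors on either side.
 * Fills in place, so long trailing gaps propagate the last value. */
void interpolate_missing(double *x, int n) {
    for (int i = 0; i < n; i++) {
        if (!isnan(x[i])) continue;
        int l = i - 1, r = i + 1;
        while (l >= 0 && isnan(x[l])) l--;   /* nearest valid point to the left  */
        while (r < n && isnan(x[r])) r++;    /* nearest valid point to the right */
        if (l >= 0 && r < n)  x[i] = 0.5 * (x[l] + x[r]);
        else if (l >= 0)      x[i] = x[l];   /* boundary gap: copy the one neighbor */
        else if (r < n)       x[i] = x[r];
        /* else: the whole series is missing; leave as NaN */
    }
}
```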
Computational complexity
Empirical running times for LSA calculations on datasets of different sizes
Time series  Time points  fastLSA (single thread)  fastLSA (16 threads)  

1,000  130  6 sec  1 sec  
CDC  6,178  24  3.24 min  2.2 sec 
MPH  14,105  390  58 min  7.5 min 
First Null  100,000  100    54 min 
Second Null  1,000,000  30    2 days 3 hrs 
Third Null  1,000,000  100    7 days 23 hrs 
Moving pictures of the human microbiome (MPH)
The MPH time series dataset [8] investigates temporal variation in the microbial community structure of two healthy human subjects, one male and one female. Samples were collected daily from three body sites, the gut (feces), mouth, and skin (left and right palms), for 15 months (male) and six months (female), with taxonomy determined by the amplified V4 region of the small subunit ribosomal RNA (SSU or 16S rRNA) gene. The male and female samples were concatenated, resulting in a profile of 14,105 taxa over 390 time points, with missing values interpolated by the average of the two nearest time points.
Microarray hybridization detection of cell cycle-regulated genes in the yeast Saccharomyces cerevisiae (CDC)
Social media: top 1000 Twitter and Memetracker phrases (Twitter)
Null hypothesis simulated data
Discussion
FAST LSA uses a novel asymptotic upper bound algorithm for calculating the LSA p-value. This is done without any normality assumption, extending application to untransformed data and data in violation of normality assumptions, such as time series containing many zero entries. Moreover, FAST LSA replaces the computationally intensive permutation test that was previously required to calculate the significance of LSA statistics, dramatically increasing the size of datasets that can be analyzed on a single desktop machine. However, like all asymptotic bounds, a significant number of observations is needed before it applies. From theoretical simulation, we estimate this to be greater than 30 time points for most datasets. This is supported by our experience with the CDC and MPH datasets, which have 24 and 390 time points, respectively. Despite this potential operating constraint, FAST LSA expands the boundaries of LSA, allowing time series analysis on datasets with millions of covariate time series. The algorithm is implemented as a computationally efficient software package, FAST LSA, written in C and optimized for threading on multicore computers using POSIX threads. Finally, we demonstrated the utility and versatility of FAST LSA using real-world and simulated time series datasets from different fields of inquiry, visualizing the resulting clusters of local similarity using the Cytoscape software.
LSA statistics have been demonstrated to capture relevant local similarity structure in a number of biological datasets [16, 17]. However, previous implementations were limited to relatively small datasets. FAST LSA improves the computational efficiency and statistical robustness of LSA, a necessary step toward using the statistic to explore next-generation time series datasets. Despite the current improvements, the structure captured by LSA is less than ideal and could be further improved. Given two time series vectors, LSA reports the strongest statistic. However, it is unclear where this significant time window occurs, or whether there are multiple small windows with large LSA values that go unreported. A visual inspection of the time series in question is often required to check exactly how the two are similar. Another hazard is that LSA does not handle missing data effectively, so a continuous version of the statistic would be desirable for exploratory experiments in which sampling conditions vary to small degrees, allowing analysis to proceed without imputation. Furthermore, LSA is asymmetric in nature, meaning that time reversal can produce differing LSA values. We anticipate even better performance from the statistic if these issues were addressed, perhaps through a modified version of PCC that isolates optimal windows of similarity.
Conclusions
LSA is a local similarity statistic that has recently been used to capture relevant local structure in time series datasets, particularly within the biological community. However, its use has been limited to smaller datasets due to an intensive permutation test used to calculate significance. Our derivation and direct calculation of an asymptotic upper bound using FAST LSA replaces this onerous calculation without a normality assumption, enabling LSA on time series datasets containing millions of covariate time series. We demonstrate the utility and versatility of FAST LSA by analyzing time series data from public health, microbial ecology, and social media and compare these results to the previous implementation of LSA, obtaining similar results with an orders-of-magnitude increase in throughput.
Project name: fastLSA
Project home page: http://www.cmde.science.ubc.ca/hallam/fastLSA/
Operating system(s): OS X, Linux, or Windows
Programming language(s): C/C++
Other requirements: 1 GB RAM
License: GPLv3
Nonacademic restrictions: None
Declarations
The publication costs for this article were funded by Genome British Columbia and Genome Canada.
This article has been published as part of BMC Genomics Volume 14 Supplement 1, 2013: Selected articles from the Eleventh Asia Pacific Bioinformatics Conference (APBC 2013): Genomics. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcgenomics/supplements/14/S1.
List of abbreviations
LSA: Local Similarity Analysis
PCC: Pearson's Correlation Coefficient
PCA: Principal Component Analysis
MDS: Multidimensional Scaling
DFA: Discriminant Fraction Analysis
MPH: Moving Pictures of the Human Microbiome
CDC: Centre of Disease Control
Declarations
Acknowledgements
We would like to acknowledge Dr. Fengzhu Sun and Dr. Jed Fuhrman at the University of Southern California for their support.
Authors’ Affiliations
References
1. Lynch C: Big data: How do your data grow? Nature. 2008, 455(7209): 28-29. doi:10.1038/455028a
2. Bell G, Hey T, Szalay A: Computer science. Beyond the data deluge. Science. 2009, 323(5919): 1297-1298. doi:10.1126/science.1170411
3. Schadt EE, Linderman MD, Sorenson J, Lee L, Nolan GP: Computational solutions to large-scale data management and analysis. Nature Reviews Genetics. 2010, 11(9): 647-657. doi:10.1038/nrg2857
4. Ranjard L, Poly F, Lata JC, Mougel C, Thioulouse J, Nazaret S: Characterization of bacterial and fungal soil communities by automated ribosomal intergenic spacer analysis fingerprints: biological and methodological variability. Applied and Environmental Microbiology. 2001, 67(10): 4479-4487. doi:10.1128/AEM.67.10.4479-4487.2001
5. Mooy BASV, Devol AH, Keil RG: Relationship between bacterial community structure, light, and carbon cycling in the eastern subarctic North Pacific. Limnology and Oceanography. 2004, 1056-1062.
6. Ruan Q, Dutta D, Schwalbach MS, Steele JA, Fuhrman JA, Sun F: Local similarity analysis reveals unique associations among marine bacterioplankton species and environmental factors. Bioinformatics. 2006, 22(20): 2532-2538. doi:10.1093/bioinformatics/btl417
7. Xia LC, Steele JA, Cram JA, Cardon ZG, Simmons SL, Vallino JJ, Fuhrman JA, Sun F: Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates. BMC Syst Biol. 2011, 5(Suppl 2): S15. doi:10.1186/1752-0509-5-S2-S15
8. Caporaso JG, Lauber CL, Costello EK, Berg-Lyons D, Gonzalez A, Stombaugh J, Knights D, Gajer P, Ravel J, Fierer N, Gordon JI, Knight R: Moving pictures of the human microbiome. Genome Biol. 2011, 12: R50. doi:10.1186/gb-2011-12-5-r50
9. Spellman PT, Sherlock G, Zhang MQ, Iyer VR, Anders K, Eisen MB, Brown PO, Botstein D, Futcher B: Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Molecular Biology of the Cell. 1998, 9(12): 3273-3297. doi:10.1091/mbc.9.12.3273
10. Yang J, Leskovec J: Patterns of temporal variation in online media. Proceedings of the Fourth ACM International Conference on Web Search and Data Mining. 2011, 177-186.
11. Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, Amin N, Schwikowski B, Ideker T: Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Research. 2003, 13(11): 2498-2504. doi:10.1101/gr.1239303
12. Takacs L: On the distribution of the maximum of sums of mutually independent and identically distributed random variables. Advances in Applied Probability. 1970, 2: 344-354. doi:10.2307/1426323
13. Wald A: On the distribution of the maximum of successive cumulative sums of independent but not identically distributed chance variables. Bulletin of the American Mathematical Society. 1948, 54: 422-430. doi:10.1090/S0002-9904-1948-09021-8
14. Nevzorov VB, Petrov VV: On the distribution of the maximum cumulative sum of independent random variables. Theory of Probability and its Applications. 1969, 14(4): 682-687. doi:10.1137/1114083
15. Lindeberg J: Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung. Mathematische Zeitschrift. 1922, 15: 211-225. doi:10.1007/BF01494395
16. Fuhrman JA, Steele JA: Community structure of marine bacterioplankton: patterns, networks, and relationships to function. Aquatic Microbial Ecology. 2008, 53: 69-81.
17. Steele JA, Countway PD, Xia L, Vigil PD, Beman JM, Kim DY, Chow CET, Sachdeva R, Jones AC, Schwalbach MS, Rose JM, Hewson I, Patel A, Sun F, Caron DA, Fuhrman JA: Marine bacterial, archaeal and protistan association networks reveal ecological linkages. The ISME Journal. 2011, 5(9): 1414-1425. doi:10.1038/ismej.2011.24
18. Cherry JM, Hong EL, Amundsen C, Balakrishnan R, Binkley G, Chan ET, Christie KR, Costanzo MC, Dwight SS, Engel SR, Fisk DG, Hirschman JE, Hitz BC, Karra K, Krieger CJ, Miyasato SR, Nash RS, Park J, Skrzypek MS, Simison M, Weng S, Wong ED: Saccharomyces Genome Database: the genomics resource of budding yeast. Nucleic Acids Res. 2012, 40: D700-D705. doi:10.1093/nar/gkr1029
19. Ashe M, deBruin RA, Kalashnikova T, McDonald WJ, Yates JR, Wittenberg C: The SBF- and MBF-associated protein Msa1 is required for proper timing of G1-specific transcription in Saccharomyces cerevisiae. Journal of Biological Chemistry. 2007, 283: 6040-6049.
20. Ewen ME: Where the cell cycle and histones meet. Genes Dev. 2000, 14: 2265-2270. doi:10.1101/gad.842100
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.