
Simultaneous inferences based on empirical Bayes methods and false discovery rates in eQTL data analysis



Genome-wide association studies (GWAS) have identified hundreds of genetic variants associated with complex human diseases, clinical conditions and traits. Genetic mapping of expression quantitative trait loci (eQTLs) is providing us with novel functional effects of thousands of single nucleotide polymorphisms (SNPs). In a classical quantitative trait loci (QTL) mapping problem, multiple tests are performed to assess whether one trait is associated with a number of loci. In contrast to QTL studies, thousands of traits are measured along with thousands of gene expressions in an eQTL study. For such a study, a huge number of tests (~10⁶) have to be performed. This extreme multiplicity gives rise to many computational and statistical problems. In this paper we address these issues using two closely related inferential approaches: an empirical Bayes method that bears the Bayesian flavor without requiring much a priori knowledge, and the frequentist method of false discovery rates. A three-component t-mixture model has been used for the parametric empirical Bayes (PEB) method. Inferences have been obtained using the Expectation/Conditional Maximization Either (ECME) algorithm. A simulation study has also been performed and compared with a nonparametric empirical Bayes (NPEB) alternative.


The results show that PEB has an edge over NPEB. The proposed methodology has been applied to the human liver cohort (LHC) data. Our method enables the discovery of more significant SNPs with FDR < 10% compared with the previous study by Yang et al. (Genome Research, 2010).


In contrast to previously available methods based on p-values, the empirical Bayes method uses the local false discovery rate (lfdr) as the threshold. This method controls the false positive rate.


Genome-wide association studies (GWASs) have made remarkable progress in the search for susceptibility genes. In a GWAS, instead of one gene at a time, variation across the entire genome is tested for association with disease risk. GWASs exploit the linkage disequilibrium (LD) relationships among single nucleotide polymorphisms (SNPs), making it possible to assay the genome by testing a finite number of SNPs. To date, the signals that can be discovered through GWAS have not been reported exhaustively. It is important to annotate SNP information on expression for a better understanding of the genes and mechanisms driving the association. In many situations, there are more common variants truly associated with disease. These variants are highly likely to be expression quantitative trait loci (eQTLs). eQTLs are derived from polymorphisms in the genome that result in differential measurable transcript levels. Microarrays are used to measure gene expression levels across genetic mapping populations. For at least a subset of complex disorders, gene expression levels could be used as surrogates/biomarkers for classical phenotypes. The gene underlying an eQTL is considered an excellent candidate for a phenotypic QTL.

eQTL mapping is a statistical technique to locate genomic intervals that are likely to regulate the expression of each transcript, by correlating quantitative measurements of mRNA expression with genetic polymorphisms segregating in a population. In a GWAS, millions of SNPs are tested at once. Associations that initially appear significant must be statistically adjusted to account for the large number of tests being performed. A large number of false positives will result if this correction is ignored. The multiple-testing correction, however, sets a very high threshold for genome-wide significance, on the order of 5×10⁻⁸ when a million SNPs are tested. In the vast majority of cases, however, association studies have achieved only limited success. Large sample sizes are needed to achieve sufficient statistical power to detect risk alleles with effects weak enough to have escaped detection in the past; the disease risk alleles identified by GWASs so far do have weak effects, each with odds ratios of 1.1 or 1.2 [1].

Two closely related inferential procedures for multiple testing are discussed in this work: a frequentist approach based on Benjamini and Hochberg's [2] false discovery rate procedure, and an empirical Bayes methodology developed in Efron et al. [3, 4]. These two methods are not only very closely related, they can be used to support each other. In the classic two-sample problem of a microarray experiment, these approaches have been discussed by Efron and Tibshirani [5]. However, they considered a nonparametric empirical Bayes (NPEB) model. Parametric Bayesian modeling has been considered by Newton et al. [6], Lee et al. [7], Kendziorski et al. [8-10] and Gelfond et al. [11]. Hierarchical models such as gamma-gamma [6] or lognormal-normal [8] are used quite often in PEB procedures. These models suffer from the serious drawback of assuming constant variation among genes. They have been extended by allowing gene-specific variations [12]. The application of empirical Bayes has somehow not been very common in the literature. The obvious reason is that experimenters have not produced many data sets with the parallel structure necessary for empirical Bayes to do its work. Because of the recent surge in high-throughput technologies [13] and genome projects, many genome studies are now underway. These studies have become major data generators in the post-genomics era. Empirical Bayes procedures seem particularly well suited for combining information in expression data.

One of the fundamental statistical problems in microarray gene expression analysis is the need to reduce the dimensionality of the transcripts. This can be achieved by identifying differentially expressed (DE) genes under different conditions or groups. A regulatory network can be obtained by associating differential expression with the genotypes of molecular markers. It is possible to have a large number of DE genes that influence a certain phenotype even though their relative proportion is very small. It is very important to identify these DE genes from among the full set of recorded genes [6, 7, 9, 14, 15]. Empirical Bayes methods provide a natural approach to reduce the dimensionality significantly [16, 17]. Following the empirical Bayes approach, DE genes are identified using the posterior probability of differential expression. EB approaches detect a DE gene by sharing information across the whole genome.

The development of empirical Bayes methodologies that improve the power to detect DE genes essentially reduces to the choice of whether gene-specific effects should be modeled as fixed or random [18]. Both the mean and the error variance can be either fixed or random. A fixed mean and random error variance have been considered by Wright and Simon [19] and Cui et al. [20], whereas Lönnstedt et al. [21], Tai and Speed [22] and Lönnstedt and Speed [23] have considered both parameters to be random. A random mean effect with homogeneous fixed error variance has been considered by Newton et al. [6, 24], Kendziorski et al. [9] and Kendziorski et al. [10]. However, an extension of this fixed error variance has been considered by Gelfond et al. [11], who used a discrete uniform prior for the variance component.

The paper is organized as follows. In the Methods section we introduce the necessary notation for our additive genetic model, along with the notion of the false discovery rate (fdr). In this section we also establish the relationship between fdr and empirical Bayes. The Methods section further describes the proposed Expectation/Conditional Maximization Either (ECME) algorithm (Liu and Rubin [25]) in detail. This algorithm generalizes the Expectation-Maximization algorithm with a better convergence rate. A simulation study has been performed and is described in the Results section. We show that the proposed parametric empirical Bayes performs better than the nonparametric empirical Bayes in terms of controlled fdr. In the Results section, as an application, we apply the proposed methodology to the Liver Cohort (LHC) dataset. We conclude the article in the Discussion section.


In a microarray experiment, we obtain several thousand expression values, one or many for each gene. These studies offer an unprecedented ability to carry out large-scale studies of gene expression. Let us define G_i (i = 1, ..., I) as the genomic markers (i.e. SNPs) and T_j (j = 1, ..., J) as the transcripts. The identified eQTLs refer to the significant G's that are associated with T's. These associations can be found using a test statistic based on all n samples. The genetic model for this association can be one of three models: dominant, recessive or additive. Under the dominant model, we have two genotype groups for each SNP, whereas under an additive model three genotype groups are available. A transcript T_j is assumed to be associated with marker G_i if the mean expression level of T_j for one genotype group differs from that of the other genotype group for that marker. Let μ_{T,G(1)} and μ_{T,G(0)} be the group means corresponding to the genotypes of G_i. To test the hypothesis H_0: μ_{T,G(1)} = μ_{T,G(0)}, a few test statistics have been proposed for microarray data analysis [26]. The present work is based on the statistic proposed by Efron et al. [4]. The test statistic is defined as

$$ Z_{ij} = \frac{\bar{x}_{T,G(1)} - \bar{x}_{T,G(0)}}{a_{0i} + S_{ij}} $$

where S_{ij} is the usual standard deviation and a_{0i} is defined to minimize the difference in the coefficient of variation of Z_{ij} within classes of genes with approximately equal variance. A drawback of calculating a_{0i} is the computational cost. Note that if a_{0i} = 0, this reduces to the usual t-statistic. Here a_{0i} is taken to be the 90th percentile of all S_{ij} values (Efron et al. [4]).
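As an illustration, the modified statistic can be computed for all transcripts at one SNP as follows (a minimal Python sketch; the function name, the array layout, and the pooled-standard-error formula for S_ij are our assumptions, not taken from the paper's code):

```python
import numpy as np

def modified_t_stats(expr, genotype):
    """Modified t-statistics Z_ij for one SNP across all transcripts.

    expr: (J, n) array of transcript expression values.
    genotype: length-n array of 0/1 genotype-group labels for this SNP.
    """
    g1 = expr[:, genotype == 1]
    g0 = expr[:, genotype == 0]
    n1, n0 = g1.shape[1], g0.shape[1]
    diff = g1.mean(axis=1) - g0.mean(axis=1)
    # pooled standard error S_ij for each transcript (an assumed choice)
    sp2 = ((n1 - 1) * g1.var(axis=1, ddof=1) +
           (n0 - 1) * g0.var(axis=1, ddof=1)) / (n1 + n0 - 2)
    s = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n0))
    # a_0i: 90th percentile of all S_ij values, as in Efron et al. [4]
    a0 = np.percentile(s, 90)
    return diff / (a0 + s)
```

Setting the percentile to zero recovers the ordinary two-sample t-statistic, which is the special case noted above.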

When expression measurements between two groups are compared for any transcript, the observations are partitioned into two user-defined groups of sizes n_1 and n_2 with n_1 + n_2 = n. If there is no significant difference between the group means, the transcript is assumed to be equivalently expressed (EE). On the contrary, if a significant difference is observed, the transcript is termed differentially expressed (DE). Any transcript T_j and SNP G_i pair is therefore either DE or EE. This uncertainty can be modeled by a mixture of two distributions as follows:

$$ f(Z_{ij} \mid \theta) = \pi_0 f_0(Z_{ij} \mid \theta) + \pi_1 f_1(Z_{ij} \mid \theta) $$

where π_0 is the mixing proportion of EE transcripts, π_1 = 1 − π_0 is the proportion of DE transcripts, and θ is a vector of parameters characterizing the distributions. Let F_i be the minor allele frequency of the i-th SNP; then we model the distribution of Z_ij as a mixture model of the form:

$$ \Pr(Z_{ij} \mid F_i) = [f_0(Z_{ij} \mid F_i)]^{1-\delta_{ij}} \, [f_1(Z_{ij} \mid F_i)]^{\delta_{ij}} $$

where f_1(·) denotes the distribution of Z_ij for nonzero associations between G_i and T_j, and f_0(·) denotes the distribution of Z_ij for zero associations. δ_ij is defined as

$$ \delta_{ij} = \begin{cases} 1 & \text{if a nonzero association is present} \\ 0 & \text{if a zero association is present} \end{cases} $$

For any transcript and any SNP there may be three possible relations: no association, positive association and negative association. Extending the idea of the two-component mixture model, the distribution of the test statistic is modeled by the following mixture model:

$$ f(Z_{ij} \mid \psi_i, F_i) = \sum_{k=0}^{2} \pi_{ik} \, f_k(Z_{ij};\, \mu_k, \tau_k^2, \nu_k) $$


where
$$ \psi_i = (\pi_i, \theta_i, \nu_i), \quad \pi_i = (\pi_{0i}, \pi_{1i}), \quad \theta_i = (\mu_{1i}, \mu_{2i}, \tau_{1i}^2, \tau_{2i}^2), \quad \nu_i = (\nu_{1i}, \nu_{2i}) $$

with μ_{0i} = 0 and τ_{0i}² = 1. The mixing proportions π_{ik} are nonnegative constants and sum to one for fixed i. f_0(·) corresponds to the distribution for no association, whereas f_1(·) and f_2(·) correspond to the distributions for positive and negative association, respectively. In a recent work, Noma and Matsui [27] used a semiparametric hierarchical mixture model in which the distribution of the mean expression level of a transcript is a three-component mixture distribution.
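For concreteness, the three-component density in (4) can be evaluated numerically as follows (an illustrative Python sketch; all parameter values are hypothetical):

```python
import numpy as np
from scipy.stats import t as t_dist

def mixture_density(z, pi, mu, tau2, nu):
    """f(z) = sum_k pi_k f_k(z; mu_k, tau_k^2, nu_k) for the 3-component
    t mixture; component 0 is the null with mu_0 = 0, tau_0^2 = 1."""
    z = np.asarray(z, dtype=float)
    dens = np.zeros_like(z)
    for k in range(3):
        dens += pi[k] * t_dist.pdf(z, df=nu[k], loc=mu[k], scale=np.sqrt(tau2[k]))
    return dens
```

With hypothetical values π = (0.9, 0.05, 0.05), μ = (0, 4, −4), τ² = (1, 0.5, 0.5) and ν = (10, 5, 5), the density integrates to one, with the two non-null components carrying the positive- and negative-association mass.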

A full Bayesian analysis of (4) would require prior specifications of π, θ, ν, f_0(Z) and f_1(Z). However, one can use the massively parallel structure of microarray data to obtain an empirical Bayes estimate of the posterior probability. Such huge data sets motivate one to be quite empirical, abandoning a priori model specifications in favor of data-based investigations [27].

Empirical Bayes, false discovery rates (fdr) and local false discovery rate (lfdr)

The false discovery rate (fdr) is defined as the expected proportion of errors committed by falsely rejecting null hypotheses. Benjamini and Hochberg's [2] fdr criterion has a very close relation to empirical Bayes analysis. This relation strengthened the connection between Bayesian and frequentist testing theory. The close connection between fdr and the empirical Bayes methodology follows directly from Bayes' theorem and has been established by the "Equivalence theorem" [28]. Tail-area rejection regions like {Z_ij < z} are common in the frequentist framework. According to this theorem, if the tail-area rejection region is taken to be as large as possible subject to the constraint that the estimated Bayes proportion of false discoveries is less than α, then the frequentist expected proportion of false discoveries is also less than α.

The empirical Bayes approach suggests a local version of the fdr called the local false discovery rate (lfdr). The Bayes probability that a transcript T_j is EE for SNP G_i, given the test statistic Z_ij, is known as lfdr(Z_ij) and is defined as

$$ \mathrm{lfdr}(Z_{ij}) \equiv \Pr(T_j \text{ is EE} \mid Z_{ij}) = \pi_{i0} f_0(Z_{ij}) / f(Z_{ij}) $$

Analytically, fdr is a conditional expectation of lfdr defined as

$$ \mathrm{fdr}(Z_{ij}) = \frac{\int_{-\infty}^{Z_{ij}} \mathrm{lfdr}(Z) f(Z)\, dZ}{\int_{-\infty}^{Z_{ij}} f(Z)\, dZ} = E_f\{\mathrm{lfdr}(Z) \mid Z \le Z_{ij}\} $$

For the above setup in (3), 1 − δ_ij represents the local false discovery rate (lfdr), and fdr can be estimated as:

$$ \hat{\delta}_{ij} = \frac{\pi_{i1} f_1(Z_{ij})}{(1 - \pi_{i1}) f_0(Z_{ij}) + \pi_{i1} f_1(Z_{ij})}; \qquad \mathrm{lfdr}(Z_{ij}) = 1 - \hat{\delta}_{ij} $$

and hence

$$ \mathrm{fdr}(Z_{ij}) = \frac{(1-\pi_{1i}) \int_{-\infty}^{Z_{ij}} f_0(x)\, dx}{(1-\pi_{1i}) \int_{-\infty}^{Z_{ij}} f_0(x)\, dx + \pi_{1i} \int_{-\infty}^{Z_{ij}} f_1(x)\, dx} $$

ν_{0i} is estimated by the permutation method (Efron et al. [4]), and π_{0i} is estimated from the nonnegativity constraint

$$ \pi_{0i} \le \min_{Z} \frac{f_i(Z)}{f_{i0}(Z)} $$

All other parameters are estimated by the EM algorithm, assuming f_{i0}(·) to be known. There are some practical difficulties with the lfdr because it relies on densities. The estimation of the null becomes more problematic in the far tails, and it is easier to work with cumulative distribution functions than with densities. Identification of discoveries by lfdr may not be reproducible for new data. Therefore, even in the empirical Bayes framework, fdr should be preferred.
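Under the two-group model, the lfdr in (5) can be estimated pointwise as follows (a hedged Python sketch; the t-distribution choices for f_0 and f_1 and all parameter values are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np
from scipy.stats import t as t_dist

def lfdr_hat(z, pi1, nu0, mu1, tau2_1, nu1):
    """Estimate lfdr(z) = 1 - delta_hat(z) under the two-group model.

    f0 is a standardized t (standing in for the permutation null),
    f1 a shifted/scaled t for the nonzero associations.
    """
    f0 = t_dist.pdf(z, df=nu0)
    f1 = t_dist.pdf(z, df=nu1, loc=mu1, scale=np.sqrt(tau2_1))
    delta_hat = pi1 * f1 / ((1.0 - pi1) * f0 + pi1 * f1)  # Pr(DE | z)
    return 1.0 - delta_hat                                # lfdr = Pr(EE | z)
```

Near z = 0 the null density dominates and the lfdr is close to one; far in the tail of the alternative (here around z = 4) it drops toward zero, which is the behavior the thresholding rule exploits.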

Nonparametric empirical Bayes (NPEB)

The main difference between parametric empirical Bayes (PEB) and nonparametric empirical Bayes (NPEB) is the way in which f_1(·) and f_2(·) are treated. In the PEB model, the functional forms of f_1(·) and f_2(·) are known, i.e., we have a parametric family of priors. In contrast, NPEB does not assume the functional form to be known. Though NPEB methods are quite powerful, they are more suitable for large-sample analyses. To compute the fdr under the NPEB setup, we have followed the algorithm proposed by Efron et al. [4].

ECME algorithm

To fit a mixture model, the EM algorithm is widely used. For the t distribution, the mean parameter μ and variance component τ² can easily be estimated by the EM algorithm when the degrees of freedom ν are known. When ν is unknown, EM can still be used, as demonstrated by Lange, Little and Taylor [29]. But this method appears to be very slow (Liu and Rubin [30]), and an extension has been proposed by Meng and Rubin [31] as the ECM algorithm. This is a generalization of the EM algorithm in which the E step remains the same but the M step is replaced by CM (constrained or conditional maximization) steps. The ECM algorithm is in fact a generalized EM (GEM), as shown by Meng and Rubin [31]. Incidentally, the rate of convergence, in terms of iterations, of ECM is slower than that of EM. To overcome this computational problem, Liu and Rubin [30] proposed an efficient algorithm, ECME, which is again an extension of the ECM algorithm. Though it is not a GEM, it converges faster.

For the i -th SNP, the complete data is defined as

$$ D_i^C = (Z_{ij},\ \delta_{ijk1}, \delta_{ijk2}, \ldots, \delta_{ijkn},\ U_{i1}, U_{i2}, \ldots, U_{in}) $$


where
$$ \delta_{ijks} = \begin{cases} 1 & \text{if the } s\text{-th observation of } Z_{ij} \text{ belongs to the } k\text{-th component} \\ 0 & \text{otherwise} \end{cases} $$

and the U_{is} are independently distributed gamma variables.

McLachlan and Krishnan [32] have discussed the application of the EM algorithm for ML estimation in the case of a single-component t distribution. In the ECME algorithm, this result is extended to cover the present setup of a 3-component mixture of t distributions. For the sake of brevity, in this section we omit the suffix ij for all variables. To define the t distribution with mean μ, variance τ² and degrees of freedom ν, we proceed as follows:

If
$$ Z \mid (U = u,\ \delta_{ks} = 1) \sim N(\mu, \tau^2/u) \quad \text{and} \quad U \sim \Gamma\!\left(\tfrac{\nu}{2}, \tfrac{\nu}{2}\right), $$

then marginally Z ~ t(μ, τ², ν).
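This scale-mixture representation is easy to verify by simulation (a Python sketch; the parameter values are arbitrary assumptions chosen only for the demonstration):

```python
import numpy as np
from scipy import stats

# Empirical check of the scale-mixture representation:
# if Z | U = u ~ N(mu, tau^2/u) and U ~ Gamma(nu/2, rate = nu/2),
# then marginally Z ~ t(mu, tau^2, nu).
rng = np.random.default_rng(42)
mu, tau2, nu, n = 1.0, 2.0, 6.0, 200_000
u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # rate nu/2 <=> scale 2/nu
z = rng.normal(mu, np.sqrt(tau2 / u))
# Standardized draws should follow a t distribution with nu degrees of freedom
ks_stat, _ = stats.kstest((z - mu) / np.sqrt(tau2), stats.t(df=nu).cdf)
```

The Kolmogorov-Smirnov statistic against the t(ν) reference is tiny, and the sample variance matches the theoretical marginal variance τ²ν/(ν − 2).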

Following the above definition, the complete-data likelihood L_i^C can be factorized as a product of three terms: the marginal densities of the δ's, the conditional densities of U | δ, and the conditional densities of Z | U = u, δ. In this notation, the complete-data log-likelihood can be expressed as

$$ \log L^C(\psi) = \log L_1^C(\pi) + \log L_2^C(\nu) + \log L_3^C(\theta) $$


$$ \log L_1^C(\pi) = \sum_{k=0}^{2} \sum_{s=1}^{n} \delta_{ks} \log \pi_k $$

$$ \log L_2^C(\nu) = \sum_{k=0}^{2} \sum_{s=1}^{n} \delta_{ks} \left\{ -\log \Gamma\!\left(\tfrac{\nu_k}{2}\right) + \tfrac{\nu_k}{2} \log\!\left(\tfrac{\nu_k}{2}\right) + \tfrac{\nu_k}{2} (\log u_s - u_s) - \log u_s \right\} $$

$$ \log L_3^C(\theta) = \sum_{k=0}^{2} \sum_{s=1}^{n} \delta_{ks} \left\{ -\tfrac{1}{2} \log(2\pi) + \tfrac{1}{2} \log u_s - \tfrac{1}{2} \log \tau_k^2 - \tfrac{u_s}{2} \frac{(z_s - \mu_k)^2}{\tau_k^2} \right\} $$


To compute the E-step of the proposed algorithm, at the (t+1)-th iteration we need to calculate Q(ψ; ψ^(t)), the current conditional expectation of the complete-data log-likelihood log L^C(ψ). From equations (4) to (7), we can write

$$ Q(\psi; \psi^{(t)}) = Q_1(\pi; \psi^{(t)}) + Q_2(\nu; \psi^{(t)}) + Q_3(\theta; \psi^{(t)}) $$


$$ Q_1(\pi; \psi^{(t)}) = \sum_{k=0}^{2} \sum_{s=1}^{n} E_{\psi^{(t)}}[\delta_{ks} \mid z_s] \log \pi_k = \sum_{k=0}^{2} \sum_{s=1}^{n} \xi_{ks}^{(t)} \log \pi_k $$


$$ \xi_{ks}^{(t)} = \frac{\pi_k^{(t)} f(Z_s;\, \mu_k^{(t)}, \tau_k^{2(t)}, \nu_k^{(t)})}{f(Z_s; \psi^{(t)})} $$

which is the posterior probability that Z_s belongs to the k-th component of the mixture, based on the current fit ψ^(t).


$$ Q_2(\nu; \psi^{(t)}) = \sum_{k=0}^{2} \sum_{s=1}^{n} \xi_{ks}^{(t)} \left[ -\log \Gamma\!\left(\tfrac{\nu_k}{2}\right) + \tfrac{\nu_k}{2} \log\!\left(\tfrac{\nu_k}{2}\right) + \tfrac{\nu_k}{2} \left\{ \log u_{ks}^{(t)} - u_{ks}^{(t)} + \psi\!\left(\tfrac{\nu_k^{(t)}+1}{2}\right) - \log\!\left(\tfrac{\nu_k^{(t)}+1}{2}\right) \right\} \right] $$


where
$$ u_{ks}^{(t)} = \frac{\nu_k^{(t)} + 1}{\nu_k^{(t)} + (Z_s - \mu_k^{(t)})^2 / \tau_k^{2(t)}} $$

Here ψ(·) denotes the digamma function, and

$$ Q_3(\theta; \psi^{(t)}) = \sum_{k=0}^{2} \sum_{s=1}^{n} \xi_{ks}^{(t)} \left[ -\tfrac{1}{2} \log(2\pi) + \tfrac{1}{2} \log u_{ks}^{(t)} - \tfrac{1}{2} \log \tau_k^2 - \tfrac{1}{2} u_{ks}^{(t)} \frac{(Z_s - \mu_k)^2}{\tau_k^2} \right] $$


In the usual M-step, the parameters π, ν and θ can be estimated by considering equations (10)-(12) independently. The new updates for π and θ are obtained in closed form, whereas for ν an iterative procedure may be used, based on the following equations:

$$ \pi_k^{(t+1)} = \frac{\sum_{s=1}^{n} \xi_{ks}^{(t)}}{n} $$

$$ \mu_k^{(t+1)} = \frac{\sum_{s=1}^{n} \xi_{ks}^{(t)} u_{ks}^{(t)} Z_s}{\sum_{s=1}^{n} \xi_{ks}^{(t)} u_{ks}^{(t)}} $$

$$ \tau_k^{2(t+1)} = \frac{\sum_{s=1}^{n} \xi_{ks}^{(t)} u_{ks}^{(t)} (Z_s - \mu_k^{(t+1)})^2}{\sum_{s=1}^{n} \xi_{ks}^{(t)}} $$

and ν_k^{(t+1)} is the solution of the following equation:

$$ -\psi\!\left(\tfrac{\nu_k}{2}\right) + \log\!\left(\tfrac{\nu_k}{2}\right) + 1 + \frac{1}{n_k^{(t)}} \sum_{s=1}^{n} \xi_{ks}^{(t)} \left\{ \log u_{ks}^{(t)} - u_{ks}^{(t)} \right\} + \psi\!\left(\tfrac{\nu_k^{(t)}+1}{2}\right) - \log\!\left(\tfrac{\nu_k^{(t)}+1}{2}\right) = 0 $$

where n_k^{(t)} = Σ_{s=1}^{n} ξ_{ks}^{(t)}.

To get an efficient algorithm, let us partition ψ as (ψ_1', ψ_2')', where ψ_1 contains all parameters except the degrees of freedom of the t distributions. The above M-step is replaced by two CM-steps, as follows.

CM-Step 1. Keeping ψ_2 fixed, i.e. ν fixed at ν^(t), maximize Q(ψ; ψ^(t)) to get ψ_1^(t+1).

CM-Step 2. Now fix ψ_1 at ψ_1^(t+1) and calculate ψ_2^(t+1) by maximizing Q(ψ; ψ^(t)).

Furthermore, to make the algorithm more efficient, after the first CM-step we carry out the E-step with ψ = (ψ_1^{(t+1)'}, ψ_2^{(t)'})' instead of ψ = (ψ_1^{(t)'}, ψ_2^{(t)'})'.
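The E-step and the two CM-steps described above can be put together as follows (an illustrative Python sketch, not the authors' implementation; the initial values, the search bounds, and the use of a bounded one-dimensional search on the observed-data likelihood for each ν_k, one variant of ECME's "Either" step, are all our choices):

```python
import numpy as np
from scipy.stats import t as t_dist
from scipy.optimize import minimize_scalar

def ecme_t_mixture(z, n_iter=30):
    """ECME-style fit of a 3-component t mixture; component 0 is the
    null with mu_0 = 0 and tau_0^2 = 1 held fixed."""
    z = np.asarray(z, float)
    n = len(z)
    pi = np.array([0.9, 0.05, 0.05])
    mu = np.array([0.0, np.percentile(z, 97), np.percentile(z, 3)])
    tau2 = np.array([1.0, 1.0, 1.0])
    nu = np.array([10.0, 10.0, 10.0])

    def neg_loglik(nu_try):
        f = sum(pi[k] * t_dist.pdf(z, nu_try[k], mu[k], np.sqrt(tau2[k]))
                for k in range(3))
        return -np.log(f).sum()

    for _ in range(n_iter):
        # E-step: responsibilities xi_ks and gamma weights u_ks
        dens = np.array([pi[k] * t_dist.pdf(z, nu[k], mu[k], np.sqrt(tau2[k]))
                         for k in range(3)])
        xi = dens / dens.sum(axis=0)
        u = (nu[:, None] + 1) / (nu[:, None] + (z - mu[:, None]) ** 2 / tau2[:, None])
        # CM-step 1: closed-form updates for pi and, for the non-null
        # components, mu and tau^2 (nu held fixed)
        pi = xi.sum(axis=1) / n
        for k in (1, 2):
            w = xi[k] * u[k]
            mu[k] = (w * z).sum() / w.sum()
            tau2[k] = (w * (z - mu[k]) ** 2).sum() / xi[k].sum()
        # CM-step 2: update each nu_k by a bounded 1-D search on the
        # observed-data log-likelihood, other parameters fixed
        for k in range(3):
            def f1d(v, k=k):
                nu_try = nu.copy()
                nu_try[k] = v
                return neg_loglik(nu_try)
            nu[k] = minimize_scalar(f1d, bounds=(2.1, 200.0), method="bounded").x
    return pi, mu, tau2, nu
```

On data simulated from a well-separated three-component mixture, the sketch recovers the mixing proportions and the non-null means to within sampling error.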

Simulation study

To assess the proposed methodology, a small-sample simulation study has been performed. This gives an idea of whether or not the parameters are well estimated and, most importantly, provides information about false discovery rates.

First we simulated a dominant model with 10,000 transcripts and 10 SNPs. The equivalently expressed (EE) transcripts are generated from N(0,1) after log-transformation. We have simulated the data under three choices of the proportion of differentially expressed (DE) transcripts (p_1), taking p_1 to be 0.01, 0.05 and 0.10. DE transcripts are generated from N(4, 0.5) after log-transformation. The controlled fdr levels are also taken to be 0.01, 0.05 and 0.10 for these data sets. For p_1 = 0.05, the simulated data are shown in Figure 1.

Figure 1. A part of the simulated data for p_1 = 0.05.
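The simulation design can be sketched as follows (Python; the group size per genotype and the reading of N(4, 0.5) as mean 4 and variance 0.5 are our assumptions, since the text does not fix them):

```python
import numpy as np

def simulate_dominant(n_transcripts=10_000, n_per_group=70, p1=0.05, seed=1):
    """Simulate log-scale expression for one SNP under a dominant model.

    EE transcripts are N(0, 1) in both genotype groups; DE transcripts
    receive a mean shift drawn from N(4, 0.5) in the second group
    (0.5 read as a variance). Group sizes are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    de = rng.random(n_transcripts) < p1              # true DE indicators
    g0 = rng.normal(0.0, 1.0, size=(n_transcripts, n_per_group))
    shift = np.where(de, rng.normal(4.0, np.sqrt(0.5), n_transcripts), 0.0)
    g1 = rng.normal(0.0, 1.0, size=(n_transcripts, n_per_group)) + shift[:, None]
    return g0, g1, de
```

Returning the true DE indicators alongside the data makes it straightforward to compare the controlled FDR against the true FDR, as in Table 1.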

The impact of minor allele frequency (MAF) on the null distribution has also been studied. Under the null, for a t distribution, the only parameter to be estimated is its degrees of freedom. The comparison has been made by computing different quantiles for six choices of MAF. The lower quantiles almost overlap with each other; very small deviations are observed for the upper quantiles (Figure 2).

Figure 2. Effect of minor allele frequency (MAF) on the null distribution. Only upper quantiles (from 80%) are shown, as the lower quantiles show almost no difference.

For the 10 SNPs, we fitted the null distribution using the permutation method in a balanced way. From each group, 35 randomly selected samples are shifted to the other group and the value of the statistic is noted. This process is repeated 40 times and histograms are plotted. From the histograms, the degrees of freedom of the null distribution for each SNP are estimated. To assess goodness-of-fit, Q-Q plots are drawn (Figure 3). These plots show that the null distribution is well approximated by the standardized t distribution with appropriate degrees of freedom.

Figure 3. Q-Q plots for eight SNPs.
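The balanced permutation scheme can be sketched as follows (Python; `z_func` is a placeholder for whichever statistic is used, and the group sizes in the example are assumptions):

```python
import numpy as np

def permutation_null(z_func, g0, g1, n_swap=35, n_perm=40, seed=2):
    """Approximate the null distribution by repeatedly swapping n_swap
    randomly chosen samples between the two genotype groups, recomputing
    the statistics, and pooling the results."""
    rng = np.random.default_rng(seed)
    pooled = []
    for _ in range(n_perm):
        i0 = rng.choice(g0.shape[1], n_swap, replace=False)
        i1 = rng.choice(g1.shape[1], n_swap, replace=False)
        p0, p1 = g0.copy(), g1.copy()
        p0[:, i0], p1[:, i1] = g1[:, i1], g0[:, i0]   # balanced swap
        pooled.append(z_func(p0, p1))
    return np.concatenate(pooled)
```

The pooled statistics play the role of the histograms described above, from which the null degrees of freedom can be estimated.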

The parameters of the mixture model (4) are estimated using the proposed ECME algorithm, after estimating the null distribution by the permutation method. FDR is then computed under both the proposed parametric empirical Bayes and the nonparametric empirical Bayes setups, and the results are given in Table 1.

Table 1 The True FDR Performance of Controlled FDR in EB Models

It is evident from the table that nonparametric empirical Bayes is much more conservative than its parametric alternative. For the parametric setup, the true FDR is very close to the controlled one, whereas for nonparametric empirical Bayes the two drift apart as the true fraction of DE transcripts increases.

LHC data analysis

We applied the empirical Bayes model to analyze a publicly available dataset. In the current study, we start with liver tissue data of 213 Caucasian samples from a previously described human liver cohort (LHC) (Yang et al. [33]). DNA and RNA were isolated to obtain genotypes and gene expression profiles; the Illumina platform was used to measure expression. After filtering (MAF > 5%, HWE p-value < 10⁻⁵), we are left with 173 samples, 472,000 SNPs and 30,000 expression traits.

The distribution of minor allele frequency (MAF) over SNPs is given in the histogram in Figure 4. For all possible SNP-transcript combinations, the test statistics Z_ij are computed. We fit the mixture model using the ECME algorithm in R 2.15.1, after estimating the null distribution by the permutation method. However, owing to the high dimensionality of the data, fitting the mixture model to everything with the proposed algorithm is very difficult. For the sake of parsimony, we further filtered the data, and the ECME algorithm is applied only to top SNPs with p-value < 10⁻³. For these top SNPs, the mixture model is fitted and estimates are obtained. These estimates are then used to compute lfdr and FDR from (5) and (6), respectively.

Figure 4. Minor allele frequency (MAF) distribution. The x axis corresponds to minor allele frequencies from 25% to 50%.


To compare our results with [33], we focus on 18 of the 54 P450 genes used in that study: CYP3A5, CYP2D6, CYP4F12, CYP2E1, CYP2U1, CYP1B1, CYP2C18, CYP4F11, CYP4V2, CYP2F1, CYP39A1, CYP26C1, CYP2C19, CYP2C9, CYP2S1, CYP46A1, CYP4A11 and CYP4X1. However, our method fails to identify a single SNP with FDR < 10% for CYP2R1, and that gene symbol has been excluded from Table 2. It can be seen from Table 2 that, for a threshold of 10% FDR, the number of significant eQTL pairs is 4916. Since we have considered only top SNPs, this may be an overestimate. SNPs within 1 Mb of the gene location are defined as cis-SNPs. It is interesting to note that, among these 18 genes, the first five (CYP3A5, CYP2D6, CYP4F12, CYP2E1 and CYP2U1) each have more than 40 cis-SNPs. In all cases, the FDR-based analysis identifies more cis-SNPs for these 18 genes than Yang et al. (2010) [33].

Table 2 Number of eQTL pairs after crossing the threshold of FDR
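The cis/trans call underlying Table 2 can be written down directly (Python; the function and argument names are illustrative, and the check assumes SNP and gene are on the same chromosome):

```python
def classify_snp(snp_pos, gene_start, gene_end, window=1_000_000):
    """Label a SNP 'cis' if it lies within `window` bp (1 Mb here) of the
    gene's location on the same chromosome, 'trans' otherwise."""
    if gene_start - window <= snp_pos <= gene_end + window:
        return "cis"
    return "trans"
```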


In contrast to previously available methods based on p-values, the empirical Bayes method uses the local false discovery rate (lfdr) as the threshold. This method controls the false positive rate. For a particular SNP, the lfdr captures the site-specific evidence, whereas the FDR averages over other sites with stronger evidence. There are some limitations of using FDR that may lead to misleading inferences in genome studies; in such situations it is better to use lfdr, which is, however, more difficult to estimate than FDR. There is also still a computational problem that needs attention: because of the high dimensionality of the data, existing algorithms sometimes fail, which necessitates the search for more efficient algorithms. The choice of the threshold FDR value is an important deciding factor in such studies. It would be interesting to see how the number of cis-SNPs varies with changes in the FDR threshold. In this way the FDR criterion can be used to estimate the number of SNPs that we may need to consider.


1. Liu C: Brain expression quantitative trait locus mapping informs genetic studies of psychiatric diseases. Neurosci Bull. 2011, 27 (2): 123-133. 10.1007/s12264-011-1203-5.

2. Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological). 1995, 289-300.

3. Efron B, Storey J, Tibshirani R: Microarrays, empirical Bayes methods, and false discovery rates. Stanford Technical Report. 2001.

4. Efron B, Tibshirani R, Storey JD, Tusher V: Empirical Bayes analysis of a microarray experiment. Journal of the American Statistical Association. 2001, 96 (456): 1151-1160. 10.1198/016214501753382129.

5. Efron B, Tibshirani R: Empirical Bayes methods and false discovery rates for microarrays. Genetic Epidemiology. 2002, 23 (1): 70-86. 10.1002/gepi.1124.

6. Newton MA, Kendziorski CM, Richmond CS, Blattner FR, Tsui KW: On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. Journal of Computational Biology. 2001, 8: 37-52. 10.1089/106652701300099074.

7. Lee MLT, Kuo FC, Whitmore GA, Sklar J: Importance of replication in microarray gene expression studies: statistical methods and evidence from repetitive cDNA hybridizations. Proceedings of the National Academy of Sciences. 2000, 97 (18): 9834-9839. 10.1073/pnas.97.18.9834.

8. Kendziorski CM, Zhang Y, Lan H, Attie A: The efficiency of mRNA pooling in microarray experiments. Biostatistics. 2003, 4: 465-477. 10.1093/biostatistics/4.3.465.

9. Kendziorski CM, Newton MA, Lan H, Gould MN: On parametric empirical Bayes methods for comparing multiple groups using replicated gene expression profiles. Stat Med. 2003, 22: 3899-3914. 10.1002/sim.1548.

10. Kendziorski CM, Chen M, Yuan M, Lan H, Attie AD: Statistical methods for expression quantitative trait loci (eQTL) mapping. Biometrics. 2006, 62 (1): 19-27. 10.1111/j.1541-0420.2005.00437.x.

11. Gelfond JAL, Ibrahim JG, Zou F: Proximity model for expression quantitative trait loci (eQTL) detection. Biometrics. 2007, 63: 1108-1116. 10.1111/j.1541-0420.2007.00778.x.

12. Lo K, Gottardo R: Flexible empirical Bayes models for differential gene expression. Bioinformatics. 2007, 23 (3): 328-335. 10.1093/bioinformatics/btl612.

13. Sánchez-Linares I, Pérez-Sánchez H, Cecilia JM, García JM: High-throughput parallel blind virtual screening using BINDSURF. BMC Bioinformatics. 2012, 13 (Suppl 14): S13. 10.1186/1471-2105-13-S14-S13.

14. Bergemann TL, Wilson J: Proportion statistics to detect differentially expressed genes: a comparison with log-ratio statistics. BMC Bioinformatics. 2011, 12: 228. 10.1186/1471-2105-12-228.

15. Ruan L, Yuan M: An empirical Bayes' approach to joint analysis of multiple microarray gene expression studies. Biometrics. 2011, 67 (4): 1617-1626. 10.1111/j.1541-0420.2011.01602.x.

16. Efron B, Morris C: Combining possibly related estimation problems (with discussion). Journal of the Royal Statistical Society, Series B. 1973, 35: 379-421.

17. Efron B, Morris C: Stein's paradox in statistics. Scientific American. 1977, 236: 119-127. 10.1038/scientificamerican0577-119.

18. Bar H, Booth J, Schifano E, Wells MT: Laplace approximated EM microarray analysis: an empirical Bayes approach for comparative microarray experiments. Statistical Science. 2010, 25 (3): 388-407. 10.1214/10-STS339.

19. Wright GW, Simon RM: A random variance model for detection of differential gene expression in small microarray experiments. Bioinformatics. 2003, 19: 2448-2455. 10.1093/bioinformatics/btg345.

20. Cui X, Hwang JG, Qiu J, Blades NJ, Churchill GA: Improved statistical tests for differential gene expression by shrinking variance components estimates. Biostatistics. 2005, 6 (1): 59-75. 10.1093/biostatistics/kxh018.

21. Lönnstedt I, Grant S, Begley G, Speed TP: Microarray analysis of two interacting treatments: a linear model and trends in expression over time. Technical Report, Department of Mathematics, Uppsala University, Sweden. 2001.

22. Tai YC, Speed TP: A multivariate empirical Bayes statistic for replicated microarray time course data. The Annals of Statistics. 2006, 34 (5): 2387-2412. 10.1214/009053606000000759.

23. Lönnstedt I, Speed T: Replicated microarray data. Statistica Sinica. 2002, 12: 31-46.

24. Newton MA, Kendziorski CM: Parametric empirical Bayes methods for microarrays. The Analysis of Gene Expression Data: Methods and Software. 2003, 254-271.

25. Liu C, Rubin DB: The ECME algorithm: a simple extension of EM and ECM with faster monotone convergence. Biometrika. 1994, 81 (4): 633-648. 10.1093/biomet/81.4.633.

26. Cui X, Churchill GA: Statistical tests for differential expression in cDNA microarray experiments. Genome Biol. 2003, 4 (4): 210. 10.1186/gb-2003-4-4-210.

27. Noma H, Matsui S: The optimal discovery procedure in multiple significance testing: an empirical Bayes approach. Statistics in Medicine. 2012, 31 (2): 165-176. 10.1002/sim.4375.

28. Efron B: Robbins, empirical Bayes and microarrays. The Annals of Statistics. 2003, 31 (2): 366-378. 10.1214/aos/1051027871.

29. Lange KL, Little RJ, Taylor JM: Robust statistical modeling using the t distribution. Journal of the American Statistical Association. 1989, 84 (408): 881-896.

30. Liu C, Rubin DB: The ECME algorithm: a simple extension of EM and ECM with faster monotone convergence. Biometrika. 1994, 81 (4): 633-648. 10.1093/biomet/81.4.633.

31. Meng XL, Rubin DB: Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika. 1993, 80 (2): 267-278. 10.1093/biomet/80.2.267.

32. McLachlan G, Krishnan T: The EM Algorithm and Extensions. Wiley Series in Probability and Statistics. 1997.

33. Yang X, Zhang B, Lum PY: Systematic genetic and genomic analysis of cytochrome P450 enzyme activities in human liver. Genome Research. 2010, 20 (8): 1020-1036. 10.1101/gr.103341.109.



This work is supported by the U.S. National Institutes of Health grant R01 GM74217 (Lang Li) and AHRQ grant R01HS019818-01 (Malaz Boustani).


The publication costs were funded by the authors through P50 CA113001 (Huang, T.M.), R01 GM088076 (Skaar, T.) and R01 HS019818 (Dexter).

This article has been published as part of BMC Genomics Volume 14 Supplement 8, 2013: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM 2013): Genomics. The full contents of the supplement are available online at

Author information

Authors and Affiliations


Corresponding author

Correspondence to Lang Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This Open Access article is distributed under the terms of the Creative Commons Attribution License ( ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Chakraborty, A., Jiang, G., Boustani, M. et al. Simultaneous inferences based on empirical Bayes methods and false discovery rates ineQTL data analysis. BMC Genomics 14 (Suppl 8), S8 (2013).
