 Research article
 Open Access
A robust penalized method for the analysis of noisy DNA copy number data
BMC Genomics volume 11, Article number: 517 (2010)
Abstract
Background
Deletions and amplifications of the human genomic DNA copy number are the causes of numerous diseases, such as various forms of cancer. Therefore, the detection of DNA copy number variations (CNV) is important in understanding the genetic basis of many diseases. Various techniques and platforms have been developed for genome-wide analysis of DNA copy number, such as array-based comparative genomic hybridization (aCGH) and high-resolution mapping with high-density tiling oligonucleotide arrays. Since complicated biological and experimental processes are often associated with these platforms, the data can potentially be contaminated by outliers.
Results
We propose a penalized LAD regression model with the adaptive fused lasso penalty for detecting CNV. This method is robust to outliers and incorporates both the spatial dependence and the sparsity of CNV into the analysis. Our simulation studies and real data analysis indicate that the proposed method can correctly detect the numbers and locations of the true breakpoints while appropriately controlling the false positives.
Conclusions
The proposed method has three advantages for detecting CNV change points: it is robust to outliers; it incorporates both spatial dependence and sparsity; and it estimates the true value at each marker accurately.
Background
Deletions and amplifications of the human genomic DNA copy number are the causes of numerous diseases. They are also related to phenotypic variation in the normal population. Therefore, the detection of DNA copy number variation (CNV) is important in understanding the genetic basis of disease, such as various types of cancer. Several techniques and platforms have been developed for genome-wide analysis of DNA copy number, including comparative genomic hybridization (CGH), array-based comparative genomic hybridization (aCGH), single nucleotide polymorphism (SNP) arrays and high-resolution mapping using high-density tiling oligonucleotide arrays (HR-CGH) [1–5]. These platforms have been used with microarrays. Each microarray consists of tens of thousands of genomic targets or probes, sometimes referred to as markers, which are spotted or printed on a glass surface. During aCGH analysis, a DNA sample of interest (test sample) and a reference sample are differentially labelled with dyes, typically Cy3 and Cy5, and mixed. The combined sample is then hybridized to the microarray and imaged, which yields the test and reference intensities for all the markers. The goal of the analysis of DNA copy number data is to partition the whole genome into segments where copy numbers change between contiguous segments, and subsequently to quantify the copy number in each segment. Therefore, identifying the locations of copy number changes is a key step in the analysis of DNA copy number data.
Several methods have been proposed to identify the breakpoints of copy number changes. A genetic local search algorithm was developed to localize the breakpoints along the chromosome [6]. A circular binary segmentation (CBS) procedure was proposed to look for two breakpoints at a time by considering the segment as a circle [7]. An unsupervised hidden Markov model (HMM) approach was used to classify each chromosome into different states representing different copy numbers [8]. A hierarchical clustering algorithm was studied to select interesting clusters by controlling the false discovery rate (FDR) [9]. A wavelet approach for denoising the data was used to uncover the true copy number changes [10]. The performances of these methods have been carefully compared [11].
Recently, several penalized regression methods have been proposed for detecting change points. In the framework of penalized regression, a least squares (LS) regression model was used with the least absolute penalty on the differences between the relative copy numbers of neighboring markers [12]. This model was called the Lasso-based (LB) model since it can be recast as LS regression with the Lasso penalty [13]. The LB model imposes some smoothness on the relative copy numbers along the chromosome. However, it does not take into account the sparsity of the copy number variations. Here smoothness means that nearby markers tend to have the same intensities and there are only a few markers where changes occur; sparsity means that only a small number of markers have nonzero intensities. A penalized LS regression with the fused lasso penalty (LSFL) was proposed to detect "hot spots" in CGH data [14, 15]. This method incorporates both the sparsity and the smoothness of the data. It is well known that solutions based on the LS framework can be easily distorted by a single outlier. Both the LB and LSFL methods lack robustness when the data do not follow a well-behaved distribution. Considering the possible data contamination in a microarray experiment, a quantile regression with Lasso (Quantile LB) method was studied for noisy array CGH data [16, 17]. However, when the data are sparse, the Quantile LB method does not incorporate the sparsity of the data and tends to identify false positive change points.
In this manuscript, we propose a penalized LAD regression with the adaptive fused lasso penalty to analyze noisy data sets. We call this method LADaFL. The proposed LADaFL method has three advantages in detecting CNV change points. First, it is expected to be resistant to outliers through the LAD loss function. Second, the adaptive fused lasso penalty can incorporate both the spatial dependence and the sparsity of CNV data sets into the analysis. Third, the adaptive procedure is expected to significantly improve the estimates of the true intensity at each marker.
Methods
LADaFL model for CNV analysis
For a CGH profile array, let y_i be the log2 ratio of the intensity of the red over the green channel at marker i on a chromosome, where the red and green channels measure the intensities of the test (e.g. cancer) and reference (e.g. normal) samples. We assume that these intensities have been properly normalized. Let β_i be the true relative copy number and u_i (= β_i − β_{i−1}) be the true jump value at marker i. For notational convenience, we set β_0 = 0, so that u_1 = β_1. The observed y_i can be considered a realization of β_i at marker i with random noise,
y_i = β_i + ε_i,  i = 1, ⋯, n,  (1)

where n is the number of markers on a given chromosome. Our task is to make inferences about the β_i's based on the observed y_i's. Three considerations drive model (1). First, there may be outliers in the observed data, so a robust procedure is needed. Second, the true β_i's have spatial dependence because the relative copy numbers of nearby markers are the same except in regions where they change abruptly. Third, copy number changes occur at only a few locations on the chromosome, so most of the β_i's should be zero. Based on these three considerations, we propose the criterion

∑_{i=1}^{n} |y_i − β_i| + λ_1 ∑_{i=1}^{n} a_i |β_i| + λ_2 ∑_{i=2}^{n} b_i |β_i − β_{i−1}|.  (2)
Here, λ_1 and λ_2 are two tuning parameters controlling the sparsity and smoothness of the estimates, and a_i (= 1/|β̂_i^(0)|) and b_i (= 1/|û_i^(0)|) are the weights of the two penalties, computed from any consistent initial estimates β̂_i^(0) and û_i^(0). A LADaFL estimator of β (= (β_1, ⋯, β_n)') is the value β̂ that minimizes (2). In this criterion, we use the absolute loss to reduce the influence of outliers, and we use the adaptive fused lasso penalty, an adaptive version of the fused lasso penalty, to capture both the sparsity and the smoothness of the β_i's in a CGH data set. By penalizing the term ∑_{i=1}^{n} a_i |β_i| in (2), the sparse solution β̂_i is expected to have certain oracle properties under some conditions [18]. The oracle properties can be understood as follows: the estimates of the truly nonzero β_i's in the full model perform as well as if the truly zero β_i's were known in advance. If we rewrite (2) as a regression problem in the u_i's, then the term ∑_{i=2}^{n} b_i |β_i − β_{i−1}| (= ∑_{i=2}^{n} b_i |u_i|) measures the sparsity of the parameters u_i, which reflects the spatial dependence of the true β_i's. By penalizing this term, the sparse solution û_i is likewise expected to have certain oracle properties under some conditions.
In our study, we set the initial values β̂^(0) to the regular LAD estimates. In other words, β̂_i^(0) = y_i for i = 1, 2, ⋯, n and û_i^(0) = y_i − y_{i−1} for i = 2, ⋯, n.
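The criterion (2) with these LAD initial estimates can be sketched in Python as follows. The function name and the small `eps` guard against division by zero are our own additions; the weights follow the paper's choice β̂_i^(0) = y_i and û_i^(0) = y_i − y_{i−1}.

```python
import numpy as np

def ladafl_objective(beta, y, lam1, lam2, eps=1e-8):
    """Evaluate the LADaFL criterion (2) for a candidate beta.

    Weights use the regular LAD initial estimates beta0_i = y_i, as in
    the paper; `eps` is a numerical safeguard, not part of the method.
    """
    beta = np.asarray(beta, dtype=float)
    y = np.asarray(y, dtype=float)
    a = 1.0 / (np.abs(y) + eps)                # a_i = 1 / |beta_i^(0)|
    u0 = np.diff(y, prepend=0.0)               # u_i^(0) = y_i - y_{i-1}, u_1^(0) = y_1
    b = 1.0 / (np.abs(u0) + eps)               # b_i = 1 / |u_i^(0)|
    loss = np.sum(np.abs(y - beta))            # LAD loss
    sparsity = lam1 * np.sum(a * np.abs(beta))                 # adaptive lasso term
    fusion = lam2 * np.sum(b[1:] * np.abs(np.diff(beta)))      # adaptive fusion term
    return loss + sparsity + fusion
```

Setting beta equal to y drives the loss term to zero but leaves the penalty terms, which is exactly the trade-off the tuning parameters λ_1 and λ_2 control.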
Computation
Let y = (y_1, ⋯, y_n)' and define the n × n diagonal matrix U_{λ1} = diag(a_1 λ_1/2, a_2 λ_1, ⋯, a_n λ_1). Define the n × n matrix V_{λ1,λ2} as
Considering the new response vector y* = (y', 0', 0')' and the new design matrix X* = [I, U'_{λ1}, V'_{λ1,λ2}]', we can rewrite (2) as

min_β ∑_j |y*_j − (X* β)_j|.  (3)

For every fixed λ_1 and λ_2, (3) is the objective function of a LAD regression problem with a new sparse design matrix X*. Therefore, an existing program such as the R quantreg package can be used to compute β̂.
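A minimal sketch of this reduction is below, using the standard linear programming reformulation of LAD regression in place of R's quantreg. The exact form of V_{λ1,λ2} is not reproduced in the text above, so we substitute a weighted first-difference matrix as an assumption; the λ_1/2 factor on the first diagonal entry of U follows the matrix as printed.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Solve min_beta ||y - X beta||_1 via the standard LP reformulation:
    minimize sum(t) subject to -t <= y - X beta <= t."""
    m, n = X.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])          # variables: [beta, t]
    A_ub = np.block([[X, -np.eye(m)], [-X, -np.eye(m)]])   # X b - t <= y; -X b - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

def ladafl_fit(y, lam1, lam2, eps=1e-8):
    """Sketch of the augmented-design construction (y*, X*); V here is an
    assumed (n-1) x n weighted difference matrix, not the paper's exact V."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    a = 1.0 / (np.abs(y) + eps)
    b = 1.0 / (np.abs(np.diff(y)) + eps)
    U = np.diag(np.concatenate([[a[0] * lam1 / 2], a[1:] * lam1]))
    D = np.eye(n)[1:] - np.eye(n)[:-1]                     # first differences
    V = lam2 * np.diag(b) @ D
    X_star = np.vstack([np.eye(n), U, V])
    y_star = np.concatenate([y, np.zeros(n), np.zeros(n - 1)])
    return lad_fit(X_star, y_star)
```

Because both the loss and the penalties are l_1 norms, the penalized problem is itself an unpenalized LAD regression on the augmented data, so any LAD or median-regression solver can be used.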
Determining the tuning parameters
The magnitudes of the tuning parameters λ_1 and λ_2 determine the smoothness and sparsity of the estimates β̂_i. At one extreme, if λ_1 = 0 and λ_2 = 0, then the estimate of β_i is simply y_i, which obviously leads to too many estimated nonzero relative ratios. At the other extreme, if λ_1 is very large, then all the β̂_i's are forced to zero regardless of the data, which is not reasonable.
We provide a fast algorithm for choosing the tuning parameters in LADaFL. For every fixed combination of λ_1 and λ_2, we obtain a LADaFL solution β̂ and the model complexity d̂f. Let A_1 = {1 ≤ i ≤ n : β̂_i = 0} and A_2 = {2 ≤ i ≤ n : β̂_i = β̂_{i−1}, max{|β̂_i|, |β̂_{i−1}|} > 0}. If the cardinalities of A_1 and A_2 are m_1 and m_2 respectively, then d̂f = n − m_1 − m_2 [19]. Our analysis shows that the Schwarz information criterion (SIC) is relatively conservative for CGH data because of the small number of changes in a data set [20]. We therefore modify the SIC as
where q ≥ 1 is a user-defined SIC factor. A larger q tends to choose a more parsimonious model. We search for the tuning parameters λ_1 and λ_2 in the following two steps.

1. Let q = q_1 with q_1 ≥ 1. For a fixed small value of λ_1, say λ_1 = 0.001, search for the "best" λ_2 over a uniform grid by minimizing the SIC.
2. Let q = q_2 with q_2 ≥ 1. With the "best" λ_2 from step 1 fixed, increase λ_1 in small increments over a uniform grid and select the "best" λ_1 by minimizing the SIC.
Here λ_2 controls the frequency of alteration regions, and λ_1 controls the number of nonzero log2 ratios. Since a CGH array data set contains far fewer alterations than nonzero log2 ratios, we can select λ_2 more aggressively; we choose q_1 = 1.5 and q_2 = 1 in our computations.
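The two-step search above can be sketched as follows. The d̂f computation follows the definition in the text; the exact modified-SIC display is not reproduced in the extracted text, so the log(L1 loss) plus q-scaled BIC penalty used here is an assumed form, and `fit` stands for any LADaFL solver.

```python
import numpy as np

def df_hat(beta_hat, tol=1e-6):
    """df = n - m1 - m2: n minus the number of zero estimates (m1) and
    the number of fused, nonzero neighboring pairs (m2)."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    m1 = int(np.sum(np.abs(beta_hat) < tol))
    fused = (np.abs(np.diff(beta_hat)) < tol) & \
            (np.maximum(np.abs(beta_hat[1:]), np.abs(beta_hat[:-1])) > tol)
    return len(beta_hat) - m1 - int(np.sum(fused))

def sic(y, beta_hat, q):
    """Modified SIC; this log(L1 loss) + q * df * log(n)/(2n) form is an
    assumption, since the paper's display is not shown in the text."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    loss = np.sum(np.abs(y - beta_hat)) / n
    return np.log(loss + 1e-12) + q * df_hat(beta_hat) * np.log(n) / (2.0 * n)

def two_step_search(y, fit, lam1_grid, lam2_grid, q1=1.5, q2=1.0, lam1_small=0.001):
    """Step 1: pick lam2 with q = q1 at a small fixed lam1.
    Step 2: pick lam1 with q = q2, holding the chosen lam2 fixed."""
    best_lam2 = min(lam2_grid, key=lambda l2: sic(y, fit(y, lam1_small, l2), q1))
    best_lam1 = min(lam1_grid, key=lambda l1: sic(y, fit(y, l1, best_lam2), q2))
    return best_lam1, best_lam2
```

Using q_1 > q_2, as the text recommends, penalizes model complexity more heavily in the λ_2 step, reflecting that alterations are rarer than nonzero ratios.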
Even though many cancer profiles contain large aberrations, whose relative intensities are not sparse, the sparsity of the jumps (only a few jumps exist in the relative intensities) still favors the penalized method. To recover the true relative intensities accurately, we can choose a small λ_1, say λ_1 = 0.001. Our simulations show that LADaFL is highly effective in mapping these true segments.
Estimation of FDR
Let β̂_i be the LADaFL estimate obtained with the above SIC strategy and û_i (= β̂_i − β̂_{i−1}) be the estimated jump at marker i. The set {1 ≤ i ≤ n : û_i ≠ 0} includes all the potential breakpoints. However, some of the nonzero estimated jumps may not be significant and can lead to false positives. We treat the question of whether there is a significant copy number change at a position as a hypothesis testing problem [12, 15]. The null hypothesis is that marker i does not belong to any gain/loss region. When all positions are investigated simultaneously, this becomes a multiple testing problem, in which the FDR is defined as the expected proportion of false positive results; it can be estimated by the number of markers picked under the null hypothesis divided by the number of markers picked in the observed data [21–23].
Suppose the nonzero estimates û_i divide a CGH array into K segments, S_1, S_2, ⋯, S_K. The kth segment S_k, 1 ≤ k ≤ K, includes n_k markers and has sample median ỹ_k. The hypothesis of interest is
We consider the test statistic
where β̃_k is the median of the estimated copy numbers β̂_i in the kth segment and f̂(0) is an estimate of the density of the error distribution at 0 in model (1). Using Cox and Hinkley's approach, we have f̂(0) = (t − s)/[n(ê_{(t)} − ê_{(s)})], where the ê_{(i)}'s are the ordered sample residuals and the indices t and s are symmetric about the index of the median sample residual. Thus ẑ_k is approximately standard normal under H_0^k [24]. A conservative estimator of the FDR for a given cutoff value p ∈ (0, 1) is
where p_k = P(N(0, 1) > ẑ_k). In our study, we choose p = 0.002 unless otherwise specified.
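The FDR machinery can be sketched as follows. The displays for ẑ_k and the FDR estimator are missing from the extracted text, so two pieces are labeled assumptions: the z_k form below comes from standard median asymptotics (standard error ≈ 1/(2 f(0) √n_k)), and the conservative estimator is taken as K·p over the number of discoveries. The Cox-Hinkley density estimate follows the formula in the text, with the half-width √n being our own bandwidth choice.

```python
import numpy as np
from math import erf, sqrt

def f0_hat(residuals):
    """Cox-Hinkley estimate of the error density at 0:
    (t - s) / [n (e_(t) - e_(s))], with t, s symmetric about the median
    index; the sqrt(n) half-width is our own choice, not from the paper."""
    e = np.sort(np.asarray(residuals, dtype=float))
    n = len(e)
    half = max(1, int(round(sqrt(n))))
    mid = n // 2
    s, t = max(0, mid - half), min(n - 1, mid + half)
    return (t - s) / (n * (e[t] - e[s]))

def norm_sf(z):
    """P(N(0,1) > z) via the error function."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def segment_pvalues(beta_hat, segments, f0):
    """z_k = 2 f0 sqrt(n_k) |median of beta_hat on S_k| -- an assumed form
    from median asymptotics, since the paper's display is not shown."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    zs = [2.0 * f0 * sqrt(len(idx)) * abs(np.median(beta_hat[idx])) for idx in segments]
    return [norm_sf(z) for z in zs]

def fdr_hat(pvals, p=0.002):
    """Conservative FDR estimate: expected null discoveries K*p divided by
    the number of observed discoveries at cutoff p (assumed form)."""
    K = len(pvals)
    picked = sum(1 for pk in pvals if pk < p)
    return K * p / picked if picked else 0.0
```

A segment whose median estimated copy number is far from zero relative to its size gets a large z_k and a small p_k, and survives the FDR cutoff.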
Detecting the breakpoints
The procedure for detecting breakpoints can be summarized in two steps.

S1. First we use the SIC to compute the β̂_i's and û_i's. All markers with both û_i ≠ 0 and |β̂_i| > b_0 are identified as candidate breakpoints, where b_0 is an empirical cutoff threshold for possible amplifications and deletions. Earlier work suggested that possible chromosome amplifications and deletions should satisfy |log2 ratio| > 0.225, which corresponds to values between 2 and 3 standard deviations from the mean [25]. We conservatively choose b_0 = 0.1 in our experiments.

S2. For the potential breakpoints from S1, we calculate p-values and estimate the FDR. The significant breakpoints are identified by controlling the FDR.
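The screening rule of step S1 can be sketched as a short filter. The cutoff b_0 = 0.1 comes from the text; the numerical tolerance used to decide that a jump is "nonzero" is our own addition.

```python
import numpy as np

def candidate_breakpoints(beta_hat, b0=0.1, tol=1e-6):
    """Step S1: markers where the estimated jump u_i is nonzero and the
    estimated level clears the empirical cutoff b0."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    u_hat = np.diff(beta_hat, prepend=0.0)   # u_1 = beta_1 by convention
    return [i for i in range(len(beta_hat))
            if abs(u_hat[i]) > tol and abs(beta_hat[i]) > b0]
```

The candidates returned here would then pass through step S2, where p-values are computed per segment and the final breakpoints are selected by FDR control.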
Results and Discussion
Simulation studies
We evaluate the performance of the LADaFL method for detecting CNV using three simulation examples. In the first two examples, we consider 500 markers equally spaced along a chromosome.
All observed log2 ratios are generated from

y_i = β_{0i} + ε_i,  (4)
where the β_{0i}'s are the true log2 ratios of the 500 markers, with three altered regions corresponding to tetraploid, triploid and monoploid states. Similar to [12], we generate the random noises ε_i from AR(2), AR(1) and independent models, respectively.
Example 1. To demonstrate the performance of the LADaFL method under both sparsity and smoothness conditions, we set the true log2 ratios β_{0i} in (4) to be highly sparse, as in Table 1. We generate the ε_i's from the following three models so that they have the same standard deviations.
Independent: ε_i = e_{i0},
AR(1): ε_i = 0.60 ε_{i−1} + e_{i1},
AR(2): ε_i = 0.60 ε_{i−1} + 0.20 ε_{i−2} + e_{i2},
where e_{i0} ~ N(0, 0.065²), e_{i1} ~ N(0, 0.082²), and e_{i2} ~ N(0, 0.1²) for i = 1, ⋯, 500.
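The three noise models above can be simulated with one AR recursion; the burn-in length below is our own choice to wash out start-up transients and is not specified in the paper.

```python
import numpy as np

def ar_noise(n, phi, sigma_e, rng, burn=500):
    """Simulate eps_i = phi_1 eps_{i-1} + ... + phi_p eps_{i-p} + e_i
    with e_i ~ N(0, sigma_e^2); the first `burn` draws are discarded."""
    p = len(phi)
    e = rng.normal(0.0, sigma_e, size=n + burn)
    eps = np.zeros(n + burn)
    for i in range(n + burn):
        eps[i] = sum(phi[j] * eps[i - 1 - j] for j in range(min(p, i))) + e[i]
    return eps[burn:]

# Example 1's three models share the recursion with different innovations:
rng = np.random.default_rng(0)
eps_ind = ar_noise(500, [], 0.065, rng)           # independent
eps_ar1 = ar_noise(500, [0.60], 0.082, rng)       # AR(1)
eps_ar2 = ar_noise(500, [0.60, 0.20], 0.1, rng)   # AR(2)
```

Adding any of these noise vectors to the true profile β_{0i} reproduces the observed log2 ratios of model (4).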
Example 2. In this example, we use the same β_{0i}'s as in Example 1. However, to evaluate the robustness of the LADaFL estimator, we simulate the e_{ij}'s from double exponential (DE) distributions such that the ε_i's have equal standard deviation 0.1.
Independent: ε_i = e_{i0},
AR(1): ε_i = 0.60 ε_{i−1} + e_{i1},
AR(2): ε_i = 0.60 ε_{i−1} + 0.20 ε_{i−2} + e_{i2},
where e_{i0} ~ DE(0, 0.0707), e_{i1} ~ DE(0, 0.0566) and e_{i2} ~ DE(0, 0.0460) for i = 1, ⋯, 500.
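Reading DE(0, b) as a Laplace distribution with scale b (sd = b√2), the listed scale for the independent case gives sd 0.0707·√2 ≈ 0.1, matching the stated target; the same check can be run empirically. The helper names are our own.

```python
import numpy as np

# For a double-exponential (Laplace) DE(0, b): variance = 2 b^2, sd = b * sqrt(2).
def laplace_sd(b):
    return b * np.sqrt(2.0)

def de_innovations(n, b, rng):
    """Draw Laplace innovations; plugging these into the AR recursions of
    Example 1 yields the heavy-tailed noise models of Example 2."""
    return rng.laplace(0.0, b, size=n)
```

The heavier tails of the Laplace innovations produce occasional large spikes in the observed ratios, which is precisely the contamination the LAD loss is designed to resist.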
We generate 40 data sets for each model defined in Examples 1 and 2. Our simulated data sets are sparse, with two amplifications and one deletion, and only 5 true breakpoints per data set. Both the LADaFL and LSFL methods are applied to all three models. In Figure 1, we plot a sample data set from Example 2 together with the LADaFL and LSFL estimates. The simulation results are summarized in Table 2. For each model, we calculate the average number and standard deviation of all detected breakpoints over the 40 data sets. The average numbers of correctly and falsely detected breakpoints are also reported.
Our simulation results show that the LADaFL method can detect the copy number variations with high accuracy. Compared to the LSFL method, LADaFL is more stable and robust, even when the simulated data are generated from an independent model. The LSFL method tends to oversmooth the data and lacks robustness. To gain some robustness, a Loess technique was incorporated [15]. Our simulation results show that the LSFL method with the Loess technique is unstable and may miss many significant breakpoints when the data are highly sparse. For example, for the AR(2) model in Example 2, out of 5 true breakpoints, LADaFL detects 5.275 breakpoints on average with standard deviation 0.598, while LSFL detects only 2.850 breakpoints on average with standard deviation 1.189.
In Table 2, we also provide the simulation results for the LADFL method. The LADFL method is comparable to LSFL with Loess in Example 1 and superior to it in Example 2; this can be explained by the natural robustness of the LAD loss. Furthermore, owing to the adaptive procedure, LADaFL is more accurate than LADFL in detecting the significant breakpoints in both examples.
In the following Example 3, we apply LADaFL to large size aberrations with 10,000 markers equally spaced along a chromosome.
Example 3. We simulate the e_{ij}'s from the AR(1) model in Example 2. We consider three cases of large aberrations containing 99.8%, 80% and 50% of the probes, respectively, in each profile.
We summarize the simulation results in Table 3. In all three cases, LADaFL detects the breakpoints accurately. Furthermore, LADaFL significantly improves the estimation of the relative intensities for all large aberrations. The sample estimation results for three data sets, one from each case, are plotted in Figure 2. LADaFL recovers the true segments and intensities accurately.
We investigate the estimate of the FDR using the above examples. For example, if we control the FDR at level 0.002, then out of 100 iterations of the AR(1) model in Example 2 and of Case I in Example 3, 90% and 95% of them, respectively, have true FDR less than 0.002.
Furthermore, we perform a sensitivity analysis of the LADaFL model with respect to the cutoff values. In Figure 3, we plot three receiver operating characteristic (ROC) curves for the AR(1) and AR(2) models in Example 2 and Case I in Example 3, respectively. LADaFL captures DNA copy number alterations best for the AR(1) model in Example 2 and worst for Case I in Example 3.
Bacterial Artificial Chromosome (BAC) array
The BAC data set consists of single experiments on 15 fibroblast cell lines [25]. Each array contains measurements for 2276 mapped BACs spotted in triplicate. There are either one or two alterations in each cell line, as identified by spectral karyotyping, with 15 partial and 8 whole-chromosome alterations. The variable used for analysis is the normalized average of the log2 ratio of the test sample over the reference sample.
We applied both LADaFL and LSFL to four chromosomes: Chromosome 8 of GM03134, Chromosome 14 of GM01750, Chromosome 22 of GM13330, and Chromosome 23 of GM03563. Results are shown in Figure 4. Consistent with the karyotyping method, LADaFL detects breakpoints for both Chromosome 14 of GM01750 and Chromosome 8 of GM03134, whereas LSFL tends to oversmooth the estimates around the potential breakpoints and cannot detect any breakpoints. In addition, no breakpoint is detected by LADaFL for Chromosome 23 of GM03563 or Chromosome 22 of GM13330, which is also consistent with the karyotyping result. However, LSFL detects breakpoints for these two chromosomes.
Colorectal cancer data
The colorectal cancer data were reported and analyzed for genomic alterations in colorectal tumors [16, 17, 25]. All 125 aCGH data sets were collected using a BAC clone library with clones 1.5 Mb apart and a two-color system with a common reference sample. The available data sets are normalized log2 ratios of sample versus reference per array. There are 133 clones on Chromosome 1. We apply LADaFL to Chromosome 1 in samples X59, X524, X186 and X204. In Figure 5, we plot the estimates of the true intensities generated by LADaFL. Even though DNA alterations are very common among these aCGH arrays, LADaFL can still identify both weak and strong DNA alterations. For example, both the X186 and X204 data show unclear patterns, yet LADaFL recovers the true log2 ratios and reports some weak alterations.
Human chromosome 22q11 data
High-resolution CGH (HR-CGH) technology was applied to analyze CNVs on chromosome 22q11 [5]. The DNA samples were collected from patients with cat-eye syndrome, 22q11 deletion syndrome (also called velo-cardio-facial syndrome or DiGeorge syndrome) and some other conditions. A large proportion of 22q11DS patients develop learning disabilities and attention-deficit hyperactivity disorder, with large variation in the symptoms of the disease. For example, patients 03154 and 97237 had the typical LCR A → D deletion, but they exhibited considerable variation in their symptoms, which might be linked to the deletion size. This warrants the development of a method that can accurately detect the sizes of these deletion regions.
These human chromosome 22q11 data sets consist of measurements on chromosome 22 from 12 patients, with approximately 372,000 features in the microarray data set for each patient. To apply the LADaFL method, we partitioned the whole chromosome into several segments and then applied the method to each segment. We selected the cutoff value p = 0.0001 since the data set is very large and sparse. The LADaFL method identified all the blocks previously detected. It also detected the breakpoints for DNA block deletions and amplifications. Figure 6 gives the results for patients 03154 and 97237, indicating different deletion sizes in the two patients. In addition, patient 03154 appears to have other deleted regions that were not previously detected [5].
Conclusions
We propose to use a smoothing technique, LADaFL, to detect the breakpoints and then divide all the probes into segments for noisy CGH data. Very recently, a median smoothing median absolute deviation method (MSMAD) was proposed to improve breakpoint detection [26]. The LADaFL smoother can easily be incorporated into the median absolute deviation process.
The appealing features of the proposed LADaFL method include its resistance to outliers, its improved accuracy in mapping the true intensities, and its fast and accurate computational algorithm. The robustness is inherited from LAD regression, which significantly reduces the chance of false positives due to outlying intensity measurements. These properties are demonstrated by the generating models used in our simulation studies. The adaptive fused lasso penalty in the LADaFL method incorporates both the sparsity and the smoothness of the copy number data, and the adaptive procedure generates solutions with certain oracle properties. Computationally, the LADaFL estimator can be obtained by transformation to an unpenalized LAD regression, since both the loss and the penalty use the same l_1 norm. Our simulation and real data analyses indicate that the LADaFL method is a useful and robust approach for CNV analysis. However, some important questions require further investigation. For example, the proposed LADaFL method assumes that the reported intensity data are properly normalized. It would be useful to examine the sensitivity of the method to different normalization procedures, or perhaps to incorporate normalization into an integrated model. Furthermore, regarding the theoretical properties of LADaFL, it would be of interest to determine under what smoothness and sparsity conditions on the underlying copy number LADaFL correctly detects the breakpoints with high probability.
References
1. Kallioniemi A, Kallioniemi OP, Sudar D, Rutovitz D, Gray JW, Waldman F, Pinkel D: Comparative genomic hybridization for molecular cytogenetic analysis of solid tumors. Science. 1992, 258: 818-821. 10.1126/science.1359641.
2. Pinkel D, Segraves R, Sudar D, Clark S, Poole I, Kowbel D, Collins C, Kuo WL, Chen C, Zhai Y, Dairkee SH, Ljung BM, Gray JW, Albertson DG: High resolution analysis of DNA copy number variation using comparative genomic hybridization to microarrays. Nat Genet. 1998, 20: 207-211. 10.1038/2524.
3. Snijders AM, Nowak N, Segraves R, Blackwood S, Brown N, Conroy J, Hamilton G, Hindle AK, Huey B, Kimura K, Law S, Myambo K, Palmer J, Ylstra B, Yue JP, Gray JW, Jain AN, Pinkel D, Alberston DG: Assembly of microarrays for genome-wide measurement of DNA copy number. Nat Genet. 2001, 29: 263-264. 10.1038/ng754.
4. Zhao XJ, Li C, Paez JG, Chin K, Jänne PA, Chen TH, Girard L, Minna J, Christiani D, Leo C, Gray JW, Sellers WR, Meyerson M: An integrated view of copy number and allelic alterations in the cancer genome using single nucleotide polymorphism arrays. Cancer Res. 2004, 64: 3060-3071. 10.1158/0008-5472.CAN-03-3308.
5. Urban AE, Korbel JO, Selzer R, Richmond T, Hacker A, Popescu GV, Cubells JF, Green R, Emanuel BS, Gerstein MB, Weissman SM, Snyder M: High-resolution mapping of DNA copy alterations in human chromosome 22 using high-density tiling oligonucleotide arrays. PNAS. 2006, 103: 4534-4539. 10.1073/pnas.0511340103.
6. Jong K, Marchiori E, van der Vaart A, Ylstra B, Weiss M, Meijer G: Chromosomal breakpoint detection in human cancer. Applications of Evolutionary Computing. EvoBIO: Evolutionary Computation and Bioinformatics. 2003, Springer LNCS, 2611: 107-116.
7. Olshen AB, Venkatraman ES, Lucito R, Wigler M: Circular binary segmentation for the analysis of array-based DNA copy number data. Biostatistics. 2004, 5: 557-572. 10.1093/biostatistics/kxh008.
8. Fridlyand J, Snijders AM, Pinkel D, Albertson DG, Jain AN: Hidden Markov models approach to the analysis of the array CGH data. J Multiv Anal. 2002, 90: 132-153. 10.1016/j.jmva.2004.02.008.
9. Wang P, Kim Y, Pollack J, Narasimhan B, Tibshirani R: A method for calling gains and losses in array CGH data. Biostatistics. 2005, 6: 45-58. 10.1093/biostatistics/kxh017.
10. Hsu L, Self SG, Grove D, Randolph T, Wang K, Delrow JJ, Loo L, Porter P: Denoising array-based comparative genomic hybridization data using wavelets. Biostatistics. 2005, 6: 211-226. 10.1093/biostatistics/kxi004.
11. Lai WR, Johnson MD, Kucherlapati R, Park PJ: Comparative analysis of algorithms for identifying amplifications and deletions in array CGH data. Bioinformatics. 2005, 21 (19): 3763-3770. 10.1093/bioinformatics/bti611.
12. Huang T, Wu BL, Lizardi P, Zhao HY: Detection of DNA copy number alterations using penalized least squares regression. Bioinformatics. 2005, 21: 3811-3817. 10.1093/bioinformatics/bti646.
13. Tibshirani R: Regression shrinkage and selection via the Lasso. J Roy Statist Soc Ser B. 1996, 58: 267-288.
14. Tibshirani R, Saunders M, Rosset S, Zhu J, Knight K: Sparsity and smoothness via the fused lasso. J Roy Statist Soc Ser B. 2005, 67: 91-108. 10.1111/j.1467-9868.2005.00490.x.
15. Tibshirani R, Wang P: Spatial smoothing and hot spot detection for CGH data using the Fused Lasso. Biostatistics. 2008, 9: 18-29. 10.1093/biostatistics/kxm013.
16. Eilers HC, Menezes RX: Quantile smoothing of array CGH data. Bioinformatics. 2005, 21 (7): 1146-1153. 10.1093/bioinformatics/bti148.
17. Li Y, Zhu J: Analysis of array CGH data for cancer studies using fused quantile regression. Bioinformatics. 2007, 23 (18): 2470-2476. 10.1093/bioinformatics/btm364.
18. Zou H: The Adaptive Lasso and Its Oracle Properties. J Amer Stat Assoc. 2006, 101: 1418-1429. 10.1198/016214506000000735.
19. Gao XL, Fang YX: Generalized degrees of freedom in shrinkage LAD estimators. Manuscript. 2009, Oakland University, Rochester, MI.
20. Schwarz G: Estimating the dimension of a model. Ann Statist. 1978, 6: 461-464. 10.1214/aos/1176344136.
21. Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Roy Statist Soc Ser B. 1995, 57: 289-300.
22. Storey JD: A direct approach to false discovery rates. J Roy Statist Soc Ser B. 2002, 64: 479-498. 10.1111/1467-9868.00346.
23. Efron B, Tibshirani R: Empirical Bayes methods and false discovery rates for microarrays. Genet Epidem. 2002, 23: 70-86. 10.1002/gepi.1124.
24. Cox DR, Hinkley DV: Theoretical Statistics. 1974, Chapman and Hall, London.
25. Nakao K, Mehta KR, Fridlyand J, Moore DH, Jain AN, Lafuente A, Wiencke JW, Terdiman JP, Waldman FM: High-resolution analysis of DNA copy number alterations in colorectal cancer by array-based comparative genomic hybridization. Carcinogenesis. 2004, 25: 1345-1357. 10.1093/carcin/bgh134.
26. Budinska E, Gelnarova E, Schimek MG: MSMAD: a computationally efficient method for the analysis of noisy array CGH data. Bioinformatics. 2009, 25 (6): 703-713. 10.1093/bioinformatics/btp022.
Acknowledgements
XG was supported by an OU faculty research fellowship. JH was supported in part by grants CA120988 from the National Cancer Institute and DMS 0805670 from the National Science Foundation.
Additional information
Authors' contributions
XG and JH conceived of the research and designed the study. XG carried out the computational analysis and wrote the paper. JH helped to improve the computational analysis and manuscript preparation. Both authors read and approved the final manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Gao, X., Huang, J. A robust penalized method for the analysis of noisy DNA copy number data. BMC Genomics 11, 517 (2010). https://doi.org/10.1186/1471-2164-11-517
Keywords
 Bacterial Artificial Chromosome
 Copy Number Variation
 Comparative Genomic Hybridization
 Copy Number Change
 Oracle Property