# A robust penalized method for the analysis of noisy DNA copy number data

Xiaoli Gao^{1} and Jian Huang^{2, 3}

*BMC Genomics* **11**:517

**DOI: **10.1186/1471-2164-11-517

© Gao and Huang; licensee BioMed Central Ltd. 2010

**Received: **17 November 2009

**Accepted: **25 September 2010

**Published: **25 September 2010

## Abstract

### Background

Deletions and amplifications of human genomic DNA copy number are causes of numerous diseases, such as various forms of cancer. Therefore, the detection of DNA copy number variations (CNV) is important in understanding the genetic basis of many diseases. Various techniques and platforms have been developed for genome-wide analysis of DNA copy number, such as array-based comparative genomic hybridization (aCGH) and high-resolution mapping with high-density tiling oligonucleotide arrays. Because complicated biological and experimental processes are often associated with these platforms, the data can be contaminated by outliers.

### Results

We propose a penalized LAD regression model with the adaptive fused lasso penalty for detecting CNV. This method is robust to outliers and incorporates both the spatial dependence and the sparsity of CNV into the analysis. Our simulation studies and real data analysis indicate that the proposed method can correctly detect the numbers and locations of the true breakpoints while appropriately controlling the false positives.

### Conclusions

The proposed method has three advantages for detecting CNV change points: it is robust to outliers; it incorporates both spatial dependence and sparsity; and it estimates the true value at each marker accurately.

## Background

Deletions and amplifications of human genomic DNA copy number are causes of numerous diseases. They are also related to phenotypic variation in the normal population. Therefore, the detection of DNA copy number variation (CNV) is important in understanding the genetic basis of disease, such as various types of cancer. Several techniques and platforms have been developed for genome-wide analysis of DNA copy number, including comparative genomic hybridization (CGH), array-based comparative genomic hybridization (aCGH), single nucleotide polymorphism (SNP) arrays and high-resolution mapping using high-density tiling oligonucleotide arrays (HR-CGH) [1–5]. These platforms are based on microarrays. Each microarray consists of tens of thousands of genomic targets or probes, sometimes referred to as markers, which are spotted or printed on a glass surface. During aCGH analysis, a DNA sample of interest (test sample) and a reference sample are differentially labelled with dyes, typically Cy3 and Cy5, and mixed. The combined sample is then hybridized to the microarray and imaged, which yields test and reference intensities for all the markers. The goal of the analysis of DNA copy number data is to partition the whole genome into segments where copy numbers change between contiguous segments, and subsequently to quantify the copy number in each segment. Therefore, identifying the locations of copy number changes is a key step in the analysis of DNA copy number data.

Several methods have been proposed to identify the breakpoints of copy number changes. A genetic local search algorithm was developed to localize the breakpoints along the chromosome [6]. A circular binary segmentation procedure (CBS) was proposed to look for two breakpoints at a time by considering the segment as a circle [7]. An unsupervised hidden Markov model (HMM) approach was used to classify each chromosome into different states representing different copy numbers [8]. A hierarchical clustering algorithm was studied to select interesting clusters by controlling the false discovery rate (FDR) [9]. A wavelet approach for denoising the data was used to uncover the true copy number changes [10]. The performances of these methods were carefully compared [11].

Recently, several penalized regression methods have been proposed for detecting change points. In the framework of penalized regression, a least squares (LS) regression model was used with the least absolute penalty on the differences between the relative copy numbers of neighboring markers [12]. This model was called the Lasso based (LB) model since it can be recast into LS regression with the Lasso penalty [13]. The LB model imposes some smoothness on the relative copy numbers along the chromosome. However, it does not take into account the sparsity of the copy number variations. Here smoothness means that nearby markers tend to have the same intensities and there are only a few markers where changes occur; sparsity means that only a small number of markers have nonzero intensities. A penalized LS regression with the fused lasso penalty (LS-FL) was proposed to detect "hot spots" in CGH data [14, 15]. This method incorporates both the sparsity and smoothness properties of the data. However, it is well known that solutions based on the LS framework can be distorted by a single outlier, so both the LB and LS-FL methods lack robustness when the data deviate from a well-behaved distribution. Considering the possible data contamination in a microarray experiment, a quantile regression with Lasso (Quantile LB) method was studied for noisy array CGH data [16, 17]. However, when the data are sparse, the Quantile LB method does not incorporate the sparsity property of the data and thus tends to produce false positive change points.

In this manuscript, we propose a penalized LAD regression with the adaptive fused lasso penalty to analyze noisy data sets. We call this method LAD-aFL. The proposed LAD-aFL method has three advantages in detecting CNV change points. First, it is expected to be resistant to outliers because of the LAD loss function. Second, the adaptive fused lasso penalty incorporates both the spatial dependence and sparsity properties of CNV data sets into the analysis. Third, the adaptive procedure is expected to significantly improve the estimates of the true intensity at each marker.

## Methods

### LAD-aFL model for CNV analysis

Let *y*_{i} be the log2 ratio of the intensity of the red over the green channel at marker *i* on a chromosome, where the red and green channels measure the intensities of the test (e.g. cancer) and reference (e.g. normal) samples. We assume that those intensities have been properly normalized. Let *β*_{i} be the true relative copy number and *u*_{i} (= *β*_{i} - *β*_{i-1}) be the true jump value at marker *i*, respectively. For notational convenience, we denote *β*_{0} = 0 and thus *u*_{1} = *β*_{1}. The observed *y*_{i} can be considered a realization of *β*_{i} at marker *i* with a random noise,

$$y_i = \beta_i + \varepsilon_i, \quad i = 1, \cdots, n, \qquad (1)$$

where *n* is the number of markers on a given chromosome. Our task is to make inference about the *β*_{i}'s based on the observed *y*_{i}'s. There are three important factors in model (1). First, there may be outliers in the observed data, so a robust procedure is needed. Second, the true *β*_{i}'s have spatial dependence because the true relative copy numbers of nearby markers are the same except in the regions where the relative copy numbers change abruptly. Third, copy number changes occur at only a few locations in the chromosome; most of the *β*_{i}'s should be zero. Based on these three factors, we propose the criterion

$$\sum_{i=1}^{n} |y_i - \beta_i| + \lambda_1 \sum_{i=1}^{n} a_i |\beta_i| + \lambda_2 \sum_{i=2}^{n} b_i |\beta_i - \beta_{i-1}|. \qquad (2)$$

Here, λ_{1} and λ_{2} are two tuning parameters controlling the sparsity and smoothness of the estimates, and ${a}_{i} (= 1/|{\hat{\beta}}_{i}^{(0)}|)$ and ${b}_{i} (= 1/|{\hat{u}}_{i}^{(0)}|)$ are the weights of the two penalties, computed from any consistent initial estimates ${\hat{\beta}}_{i}^{(0)}$ and ${\hat{u}}_{i}^{(0)}$. A LAD-aFL estimator of β (= (β_{1}, ⋯, β_{n})') is the value $\hat{\beta}$ that minimizes (2). In this criterion, we use the absolute loss to reduce the influence of outliers, and we use the adaptive fused Lasso penalty, an adaptive version of the fused Lasso penalty, to capture both the sparsity and smoothness properties of the *β*_{i}'s in a CGH data set. By penalizing the term ${\sum}_{i=1}^{n}{a}_{i}\left|{\beta}_{i}\right|$ in (2), the sparse solution ${\hat{\beta}}_{i}$'s is expected to have oracle properties under some conditions [18]. The oracle properties can be understood as follows: the estimates of the true nonzero β_{i}'s in the full model are as efficient as if the true zero β_{i}'s were known in advance. If we rewrite (2) as a regression problem in the *u*_{i}'s, then the term ${\sum}_{i=2}^{n}{b}_{i}|{\beta}_{i}-{\beta}_{i-1}| (= {\sum}_{i=2}^{n}{b}_{i}|{u}_{i}|)$ measures the sparsity of the parameters *u*_{i}'s, which reflects the spatial dependence of the true *β*_{i}'s. By penalizing this term, the sparse solution ${\hat{u}}_{i}$'s is also expected to have oracle properties under some conditions.

In our study, we set the initial values ${\hat{\beta}}^{(0)}$ to be a regular LAD estimator. In other words, ${\hat{\beta}}_{i}^{(0)} = y_i$ for *i* = 1, 2, ⋯, *n* and ${\hat{u}}_{i}^{(0)} = y_i - y_{i-1}$ for *i* = 2, ⋯, *n*.
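As a concrete sketch (not the authors' code), the initial LAD estimates and adaptive weights can be computed as follows; the small `eps` floor guarding against division by zero is our addition, since an exactly zero initial estimate would make a weight infinite:

```python
import numpy as np

def initial_weights(y, eps=1e-6):
    """Initial LAD estimates and adaptive weights.

    With no penalty, the LAD fit at each marker is the observation
    itself, so beta0_i = y_i and u0_i = y_i - y_{i-1} (i = 2..n).
    Weights are reciprocals of the absolute initial estimates;
    `eps` (our addition) guards against division by zero.
    """
    y = np.asarray(y, dtype=float)
    beta0 = y.copy()                           # initial beta_i estimates
    u0 = np.diff(y)                            # initial jumps u_i, i = 2..n
    a = 1.0 / np.maximum(np.abs(beta0), eps)   # weights a_i
    b = 1.0 / np.maximum(np.abs(u0), eps)      # weights b_i
    return beta0, u0, a, b

beta0, u0, a, b = initial_weights([0.0, 0.1, 1.2, 1.1, 0.0])
```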

### Computation

Let **y** = (*y*_{1}, ⋯, *y*_{n})' and define the *n* × *n* diagonal matrix ${U}_{{\lambda}_{1}}$ = diag(*a*_{1}λ_{1}, *a*_{2}λ_{1}, ⋯, *a*_{n}λ_{1}). Define the *n* × *n* matrix ${V}_{{\lambda}_{1},{\lambda}_{2}}$ whose first row is zero and whose *i*th row, for *i* = 2, ⋯, *n*, has -*b*_{i}λ_{2} in column *i* - 1 and *b*_{i}λ_{2} in column *i*. Letting **y*** = (**y**', **0**', **0**')' and defining the new design matrix ${X}^{*}=[I,{{U}^{\prime}}_{{\lambda}_{1}},{{V}^{\prime}}_{{\lambda}_{1},{\lambda}_{2}}{]}^{\prime}$, we re-write (2) as

$$\sum_{i=1}^{3n} |y_i^* - x_i^{*\prime} \beta|, \qquad (3)$$

where ${x}_{i}^{*\prime}$ is the *i*th row of **X***.

For every fixed λ_{1} and λ_{2}, (3) is the objective function of a LAD regression problem with a new sparse design matrix **X***. Therefore, an existing program such as the R quantreg package can be used to compute $\hat{\beta}$.
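For a small instance, the augmented LAD problem (3) can be solved with a generic linear-programming solver. The sketch below uses `scipy.optimize.linprog` instead of the R quantreg package used by the authors, and takes the diagonal sparsity rows as diag(a_iλ_1); it is an illustration, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

def lad_afl(y, a, b, lam1, lam2):
    """Solve criterion (2) via the LAD regression (3), cast as an LP:
    min sum(r+ + r-)  s.t.  X* beta + r+ - r- = y*,  r+, r- >= 0."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    I = np.eye(n)
    U = np.diag(lam1 * np.asarray(a, float))   # sparsity penalty rows
    V = np.zeros((n, n))                       # fusion penalty rows
    for i in range(1, n):
        V[i, i] = lam2 * b[i - 1]
        V[i, i - 1] = -lam2 * b[i - 1]
    X = np.vstack([I, U, V])                   # 3n x n design X*
    ystar = np.concatenate([y, np.zeros(2 * n)])
    m = X.shape[0]
    # LP variables: [beta (free), r+ >= 0, r- >= 0]
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])
    A_eq = np.hstack([X, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=ystar, bounds=bounds, method="highs")
    return res.x[:n]
```

For example, a step signal of ten markers with a single jump is recovered essentially exactly with a small λ_{1} and moderate λ_{2}, since the absolute loss of flattening a five-marker segment exceeds the fusion penalty saved.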

### Determining the tuning parameters

The magnitudes of the tuning parameters λ_{1} and λ_{2} determine the smoothness and sparsity of the estimates ${\hat{\beta}}_{i}$'s. In one extreme, if λ_{1} = 0 and λ_{2} = 0, then the estimate of *β*_{i} is simply *y*_{i}, which obviously leads to too many estimated non-zero relative ratios. In the other extreme, if λ_{1} is very large, then all the ${\hat{\beta}}_{i}$'s are forced to be zero regardless of the data, which is not reasonable.

For each fixed λ_{1} and λ_{2}, we obtain a LAD-aFL solution, the ${\hat{\beta}}_{i}$'s, and the complexity of the model, $\widehat{df}$. Let ${A}_{1}=\{1\le i\le n : {\hat{\beta}}_{i}=0\}$ and ${A}_{2}=\{1\le i\le n : {\hat{\beta}}_{i}={\hat{\beta}}_{i-1},\ \mathrm{max}\{|{\hat{\beta}}_{i}|,|{\hat{\beta}}_{i-1}|\}>0\}$. If the cardinalities of *A*_{1} and *A*_{2} are *m*_{1} and *m*_{2} respectively, then $\widehat{df}$ = *n* - *m*_{1} - *m*_{2} [19]. Our analysis shows that the Schwarz information criterion (SIC) is relatively conservative for analyzing CGH data because of the small number of changes in a data set [20]. We modify the SIC as

$$\mathrm{SIC}(\lambda_1, \lambda_2) = \log\left(\frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{\beta}_i|\right) + q \cdot \widehat{df} \cdot \frac{\log n}{2n},$$

where *q* ≥ 1 is a user-defined SIC factor. A larger *q* tends to choose a more parsimonious model. We search for the tuning parameters λ_{1} and λ_{2} using the following two steps.

- 1. Let *q* = *q*_{1} with *q*_{1} ≥ 1. For a fixed small value of λ_{1}, say λ_{1} = 0.001, we search for the "best" λ_{2} from a uniform grid to minimize the SIC.
- 2. Let *q* = *q*_{2} with *q*_{2} ≥ 1. For the above "best" λ_{2}, we increase λ_{1} by small increments over a uniform grid and search for the "best" value to minimize the SIC.

Here λ_{2} controls the frequency of alteration regions, and λ_{1} controls the number of nonzero log2 ratios. Since there are far fewer alterations than nonzero log2 ratios in a CGH array data set, we can select λ_{2} more aggressively by choosing *q*_{1} = 1.5 and *q*_{2} = 1 in our computation.
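The degrees-of-freedom count and the modified criterion can be sketched as follows; the exact penalty form used here is our reading of the standard LAD-type Schwarz criterion with factor *q*, and the zero/equality tolerance `tol` is our addition for floating-point comparisons:

```python
import numpy as np

def sic(y, beta_hat, q=1.0, tol=1e-8):
    """Modified SIC for a LAD-aFL fit:
    log(sum|y - beta_hat| / n) + q * df * log(n) / (2n),
    with df = n - |A_1| - |A_2| as defined in the text [19]."""
    y = np.asarray(y, dtype=float)
    beta_hat = np.asarray(beta_hat, dtype=float)
    n = len(y)
    zero = np.abs(beta_hat) < tol
    m1 = int(zero.sum())                         # |A_1|: zero estimates
    # |A_2|: markers equal to their left neighbour, not both zero
    same = np.abs(np.diff(beta_hat)) < tol
    nonzero_pair = ~(zero[1:] & zero[:-1])
    m2 = int((same & nonzero_pair).sum())
    df = n - m1 - m2
    loss = np.abs(y - beta_hat).sum() / n
    return np.log(loss) + q * df * np.log(n) / (2 * n), df
```

A grid search over λ_{1} and λ_{2} then simply refits the model at each grid value and keeps the pair minimizing this criterion.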

Even though many cancer profiles contain large aberrations, whose relative intensities are not sparse, the sparsity of the jumps (only a few jumps exist in the relative intensities) still favors the penalized method. To reflect the true relative intensities accurately, we can choose a small λ_{1}, say λ_{1} = 0.001. Our simulations show that LAD-aFL is very efficient in mapping these true segments.

### Estimation of FDR

Let ${\hat{\beta}}_{i}$ be the LAD-aFL estimate obtained using the above SIC strategy and ${\hat{u}}_{i} (= {\hat{\beta}}_{i}-{\hat{\beta}}_{i-1})$ be the estimated jump at marker *i*. The set {1 ≤ *i* ≤ *n* : ${\hat{u}}_{i}$ ≠ 0} includes all the potential breakpoints. However, some of the nonzero estimated jumps may not be significant and can lead to false positives. We treat the question of whether there is a significant copy number change at a position as a hypothesis testing problem [12, 15]. The null hypothesis is that marker *i* does not belong to any gain/loss region. When all positions are investigated simultaneously, this becomes a multiple testing problem. In this setting, the FDR is defined as the expectation of the proportion of false positive results, which can be estimated by the number of markers picked under the null hypothesis divided by the number of markers picked in the observed data [21–23].

Suppose the identified breakpoints divide the chromosome into *K* segments, *S*_{1}, *S*_{2}, ⋯, *S*_{K}. The *k*th segment *S*_{k}, 1 ≤ *k* ≤ *K*, includes *n*_{k} markers and has sample median ${\tilde{y}}_{k}$. The hypothesis of interest is ${H}_{0}^{k}$: the true relative copy number in segment *k* is zero. By the usual asymptotics for a sample median, we use the statistic

$${\hat{z}}_{k} = 2\sqrt{n_k}\,\hat{f}(0)\,{\tilde{y}}_{k},$$

where ${\tilde{y}}_{k}$ is the sample median of the *k*th segment and $\hat{f}(0)$ is an estimate of the ordinate of the error distribution at 0 in model (1). Using Cox and Hinkley's approach, we have $\hat{f}(0)=(t-s)/[n({\hat{e}}_{(t)}-{\hat{e}}_{(s)})]$, where the ${\hat{e}}_{(i)}$'s are the ordered sample residuals and *t* and *s* are symmetric about the index of the median sample residual. Thus ${\hat{z}}_{k}$ is approximately standard normal under ${H}_{0}^{k}$ [24]. A conservative estimator of the FDR for a given cutoff value *p* ∈ (0, 1) is

$$\widehat{\mathrm{FDR}}(p) = \frac{Kp}{\#\{1 \le k \le K : p_k \le p\}},$$

where ${p}_{k}=P(N(0,1)>|{\hat{z}}_{k}|)$. In our study, we choose *p* = 0.002 unless otherwise specified.
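This testing step can be sketched as follows (our illustration; the z statistic follows the sample-median asymptotics described above, with a density ordinate estimate `f0` supplied by the caller):

```python
import math

def z_stat(segment, f0):
    """z_k = 2 * sqrt(n_k) * f0 * median(segment); approximately
    N(0, 1) under H0 by the usual sample-median asymptotics."""
    seg = sorted(segment)
    n = len(seg)
    med = seg[n // 2] if n % 2 else 0.5 * (seg[n // 2 - 1] + seg[n // 2])
    return 2.0 * math.sqrt(n) * f0 * med

def fdr_hat(z_scores, p=0.002):
    """Conservative FDR estimate: K * p over #{k : p_k <= p}, where
    p_k = P(N(0,1) > |z_k|) = erfc(|z_k| / sqrt(2)) / 2."""
    K = len(z_scores)
    p_vals = [0.5 * math.erfc(abs(z) / math.sqrt(2.0)) for z in z_scores]
    picked = sum(pk <= p for pk in p_vals)
    return ((K * p / picked) if picked else 0.0), p_vals
```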

### Detecting the breakpoints

- S1. First we use the SIC to compute the ${\hat{\beta}}_{i}$'s and ${\hat{u}}_{i}$'s. All markers where both ${\hat{u}}_{i}$ ≠ 0 and $|{\hat{\beta}}_{i}|$ > *b*_{0} are identified as candidate breakpoints, where *b*_{0} is an empirical cutoff threshold for possible amplifications and deletions. Some work has suggested that possible chromosome amplifications and deletions should satisfy |log2-ratio| > 0.225, which corresponds to values between 2 and 3 standard deviations from the mean [25]. We conservatively choose *b*_{0} = 0.1 in our experiments.
- S2. For the potential breakpoints in S1, we calculate p-values and estimate the FDR. The significant breakpoints are identified by controlling the FDR.
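Step S1 amounts to a simple filter on the fitted values; a minimal sketch (taking |β̂_{i}| against the cutoff is our reading, so that deletions are caught as well as amplifications):

```python
import numpy as np

def candidate_breakpoints(beta_hat, b0=0.1, tol=1e-8):
    """Step S1: flag markers with a nonzero estimated jump u_i whose
    fitted level also clears the cutoff b0 (b0 = 0.1 as in the text)."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    u_hat = np.diff(beta_hat, prepend=0.0)   # u_1 = beta_1 by convention
    mask = (np.abs(u_hat) > tol) & (np.abs(beta_hat) > b0)
    return np.where(mask)[0]
```

The markers returned here are then passed to step S2 for p-value calculation and FDR control.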

## Results and Discussion

### Simulation studies

We evaluate the performance of the LAD-aFL method for detecting CNV using three simulation examples. In the first two examples, we consider 500 markers equally spaced along a chromosome and generate the data from

$$y_i = \beta_{0i} + \varepsilon_i, \qquad (4)$$

where the *β*_{0i}'s are the true log2 ratios of all 500 markers, which have three altered regions corresponding to quadraploid, triploid and monoploid states. Similar to [12], we generate random noises *ε*_{i}'s from AR(2), AR(1) and independent models, respectively.

**Example 1**. *To demonstrate the performance of the LAD-aFL method under both sparsity and smoothness conditions, we set the true log2 ratios β*_{0i}*'s in (4) to be significantly sparse as in Table 1. We generate the ε*_{i}*'s from the following three models such that they have the same standard deviation*.

**Table 1** The true log2 ratios for Examples 1 and 2

| i | 1-100 | 101-110 | 111-450 | 451-460 | 461-980 | 981-1000 |
|---|---|---|---|---|---|---|
| *β*_{0i} | 0 | 1 | 0 | 0.59 | 0 | -1 |

*Independent*: *ε*_{i} = *e*_{i0},

*AR(1)*: *ε*_{i} = 0.60*ε*_{i-1} + *e*_{i1},

*AR(2)*: *ε*_{i} = 0.60*ε*_{i-1} + 0.20*ε*_{i-2} + *e*_{i2},

*where e*_{i0} ~ *N*(0, 0.065^{2}), *e*_{i1} ~ *N*(0, 0.082^{2}), *and e*_{i2} ~ *N*(0, 0.1^{2}) *for i* = 1, ⋯, 500.
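The autoregressive noise models above can be simulated as follows (a sketch; the burn-in length used to discard start-up transients is our choice):

```python
import numpy as np

def ar_noise(n, phi, sigma, rng=None, burn=200):
    """Generate AR(p) noise eps_i = sum_j phi_j * eps_{i-j} + e_i
    with e_i ~ N(0, sigma^2).  E.g. phi=(0.60,), sigma=0.082 gives
    the AR(1) model above; the stationary sd is sigma/sqrt(1-0.36),
    approximately 0.1 as stated in the text."""
    rng = np.random.default_rng(rng)
    p = len(phi)
    eps = np.zeros(n + burn)
    e = rng.normal(0.0, sigma, n + burn)
    for i in range(n + burn):
        eps[i] = e[i] + sum(phi[j] * eps[i - j - 1] for j in range(min(p, i)))
    return eps[burn:]                # drop burn-in transients
```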

**Example 2**. *In this example, we use the same β*_{0i}*'s as in Example 1. However, to evaluate the robustness property of the LAD-aFL estimator, we simulate the e*_{ij}*'s from double exponential (DE) distributions such that the ε*_{i}*'s have equal standard deviation* 0.1.

*Independent*: *ε*_{i} = *e*_{i0},

*AR(1)*: *ε*_{i} = 0.60*ε*_{i-1} + *e*_{i1},

*AR(2)*: *ε*_{i} = 0.60*ε*_{i-1} + 0.20*ε*_{i-2} + *e*_{i2},

where *e*_{i0} ~ *DE*(0, 0.0707), *e*_{i1} ~ *DE*(0, 0.0566) *and e*_{i2} ~ *DE*(0, 0.0460) for *i* = 1, ⋯, 500.

**Table 2** Simulation results for Examples 1 and 2

| | Methods | AR(2) | AR(1) | Ind. |
|---|---|---|---|---|
| Example 1 | LAD-aFL | 5.225 (0.831) | 5.375 (0.806) | 4.750 (0.669) |
| | | 4.925 | 4.975, 0.400 | 4.750, 0 |
| | LS-FL | 4.250 (1.149) | 4.750 (0.707) | 4.550 (0.959) |
| | | 4.250, 0 | 4.725, 0.025 | 4.525, 0.025 |
| | LAD-FL | 5.025 (0.479) | 4.975 (0.806) | 4.350 (1.167) |
| | | 4.850, 0.175 | 4.900, 0.075 | 4.275, 0.075 |
| Example 2 | LAD-aFL | 5.275 (0.598) | 5.475 (0.784) | 4.925 (0.350) |
| | | 5.000, 0.275 | 4.925, 0.550 | 4.900, 0.025 |
| | LS-FL | 2.850 (1.189) | 3.750 (1.171) | 3.125 (1.362) |
| | | 2.850, 0 | 3.750, 0 | 3.125, 0 |
| | LAD-FL | 4.850 (0.533) | 4.800 (0.791) | 4.575 (0.874) |
| | | 4.850, 0 | 4.575, 0.225 | 4.450, 0.125 |

Our simulation results show that the LAD-aFL method can detect the copy number variations with high accuracy. Compared to the LS-FL method, LAD-aFL is more stable and robust, even when the simulated data are generated from an independent model. The LS-FL method tends to over-smooth the data and lacks robustness. To gain some robustness, the Loess technique was applied [15]. Our simulation results show that the LS-FL method with the Loess technique is unstable and may miss many significant breakpoints when the data are significantly sparse. For example, for the AR(2) model in Example 2, out of 5 true breakpoints, LAD-aFL detects 5.275 breakpoints on average with standard deviation 0.598, while LS-FL detects only 2.850 breakpoints on average with standard deviation 1.189.

In Table 2, we also provide simulation results for the LAD-FL method. The LAD-FL method is comparable to the LS-FL with Loess in Example 1 and outperforms it in Example 2; this can be explained by the inherent robustness of the LAD loss. Furthermore, due to the adaptive procedure, the LAD-aFL is more accurate than the LAD-FL in detecting the significant breakpoints in both examples.

In the following Example 3, we apply LAD-aFL to large aberrations with 10,000 markers equally spaced along a chromosome.

**Example 3**. *We simulate the e*_{ij}*'s from the AR(1) model in Example 2. We consider three cases of large aberrations containing 99.8%, 80% and 50% of the probes, respectively, in each profile*.

**Table 3** Simulation results for Example 3

| Methods | Case 1 | Case 2 | Case 3 |
|---|---|---|---|
| True Number | 2.000 | 1.000 | 1.000 |
| LAD-aFL | 1.900 (0.410) | 1.000 (0) | 1.000 (0) |
| | 1.900 | 1.000, 0 | 1.000, 0 |
| LS-FL | 1.750 (0.444) | 1.150 (0.366) | 1.000 (0) |
| | 1.750, 0 | 0.850, 0.300 | 1.000, 0 |

We also investigate the estimate of the FDR using the above examples. For example, if we control the FDR at level 0.002, then out of 100 iterations of the AR(1) model in Example 2 and Case 1 in Example 3, 90% and 95% of them, respectively, have true FDR less than 0.002.

### Bacterial Artificial Chromosome (BAC) array

The BAC data set consists of single experiments on 15 fibroblast cell lines [25]. Each array contains measurements for 2276 mapped BACs spotted in triplicate. There are either one or two alterations in each cell line, as identified by spectral karyotyping, with 15 partial and 8 whole chromosomal alterations. The variable used for analysis is the normalized average of the log2 ratio of the test sample over the reference sample.

### Colorectal cancer data

### Human chromosome 22q11 data

High-resolution CGH (HR-CGH) technology was applied to analyze CNVs on chromosome 22q11 [5]. The DNA samples were collected from patients with Cat-Eye syndrome, 22q11 deletion syndrome (also called velocardiofacial syndrome or DiGeorge syndrome) and some other symptoms. A large proportion of 22q11DS patients develop learning disabilities and attention-deficit hyperactivity disorder, with large variation in the symptoms of the disease. For example, patients 03-154 and 97-237 had the typical LCR *A* → *D* deletion, but they exhibited considerable variation in their symptoms, which might be linked to the deletion size. Therefore, a method that can accurately detect the sizes of such deletion regions is warranted.

We set the cutoff value *p* to 0.0001 since the data set is large and significantly sparse. The LAD-aFL method identified all the blocks previously detected. It also detected the breakpoints of DNA block deletion and amplification. Figure 6 gives the results for the data from patients 03-154 and 97-237. This plot indicates the different deletion sizes in the two patients. In addition, Patient 03-154 appears to have other deleted regions which were not previously detected [5].

## Conclusions

We propose to use a smoothing technique, LAD-aFL, to detect the breakpoints in noisy CGH data and then divide all the probes into different segments. Very recently, a median smoothing median absolute deviation method (MSMAD) was proposed to improve the performance of breakpoint detection [26]. The LAD-aFL smoother can easily be incorporated into the median absolute deviation process.

The appealing features of the proposed LAD-aFL method include its resistance to outliers, its improved accuracy in mapping the true intensities, and its fast and accurate computational algorithm. The robustness is inherited from LAD regression, which significantly reduces the possibility of false positives due to outlying intensity measurements. These properties are demonstrated under the generating models used in our simulation studies. The adaptive fused Lasso penalty in the LAD-aFL method incorporates both the sparsity and smoothness properties of copy number data, and the adaptive procedure yields solutions with oracle properties. Computationally, the LAD-aFL estimator can be computed by transformation to an unpenalized LAD regression, since both the loss and penalty functions use the same *l*_{1} norm. Our simulation and real data analyses indicate that the LAD-aFL method is a useful and robust approach for CNV analysis. However, some important questions require further investigation. For example, the proposed LAD-aFL method assumes that the reported intensity data are properly normalized. It would be useful to examine the sensitivity of the method to different normalization procedures, or perhaps to incorporate normalization into an integrated model. Furthermore, regarding the theoretical properties of LAD-aFL, it would be of interest to determine under what conditions on the smoothness and sparsity of the underlying copy number the LAD-aFL can correctly detect the breakpoints with high probability.

## Declarations

### Acknowledgements

XG was supported by an OU faculty research fellowship. JH was supported in part by grants CA120988 from the National Cancer Institute and DMS 0805670 from the National Science Foundation.

## Authors’ Affiliations

## References

- Kallioniemi A, Kallioniemi OP, Sudar D, Rutovitz D, Gray JW, Waldman F, Pinkel D: Comparative genomic hybridization for molecular cytogenetic analysis of solid tumors. Science. 1992, 258: 818-821. 10.1126/science.1359641.
- Pinkel D, Segraves R, Sudar D, Clark S, Poole I, Kowbel D, Collins C, Kuo WL, Chen C, Zhai Y, Dairkee SH, Ljung BM, Gray JW, Albertson DG: High resolution analysis of DNA copy number variation using comparative genomic hybridization to microarrays. Nat Genet. 1998, 20: 207-211. 10.1038/2524.
- Snijders AM, Nowak N, Segraves R, Blackwood S, Brown N, Conroy J, Hamilton G, Hindle AK, Huey B, Kimura K, Law S, Myambo K, Palmer J, Ylstra B, Yue JP, Gray JW, Jain AN, Pinkel D, Alberston DG: Assembly of microarrays for genome-wide measurement of DNA copy number. Nat Genet. 2001, 29: 263-264. 10.1038/ng754.
- Zhao XJ, Li C, Paez JG, Chin K, Jänne PA, Chen TH, Girard L, Minna J, Christiani D, Leo C, Gray JW, Sellers WR, Meyerson M: An integrated view of copy number and allelic alterations in the cancer genome using single nucleotide polymorphism arrays. Cancer Res. 2004, 64: 3060-3071. 10.1158/0008-5472.CAN-03-3308.
- Urban AE, Korbel JO, Selzer R, Richmond T, Hacker A, Popescu GV, Cubells JF, Green R, Emanuel BS, Gerstein MB, Weissman SM, Snyder M: High-resolution mapping of DNA copy alterations in human chromosome 22 using high-density tiling oligonucleotide arrays. PNAS. 2006, 103: 4534-4539. 10.1073/pnas.0511340103.
- Jong K, Marchiori E, van der Vaart A, Ylstra B, Weiss M, Meijer G: Chromosomal breakpoint detection in human cancer. Applications of Evolutionary Computing. EvoBIO: Evolutionary Computation and Bioinformatics. 2003, Springer LNCS, 2611: 107-116.
- Olshen AB, Venkatraman ES, Lucito R, Wigler M: Circular binary segmentation for the analysis of array-based DNA copy number data. Biostatistics. 2004, 5: 557-572. 10.1093/biostatistics/kxh008.
- Fridlyand J, Snijders AM, Pinkel D, Albertson DG, Jain AN: Hidden Markov models approach to the analysis of the array CGH data. J Multiv Anal. 2002, 90: 132-153. 10.1016/j.jmva.2004.02.008.
- Wang P, Kim Y, Pollack J, Narasimhan B, Tibshirani R: A method for calling gains and losses in array CGH data. Biostatistics. 2005, 6: 45-58. 10.1093/biostatistics/kxh017.
- Hsu L, Self SG, Grove D, Randolph T, Wang K, Delrow JJ, Loo L, Porter P: Denoising array based comparative genomic hybridization data using wavelets. Biostatistics. 2005, 6: 211-226. 10.1093/biostatistics/kxi004.
- Lai WR, Johnson MD, Kucherlapati R, Park PJ: Comparative analysis of algorithms for identifying amplifications and deletions in array CGH data. Bioinformatics. 2005, 21 (19): 3763-3770. 10.1093/bioinformatics/bti611.
- Huang T, Wu BL, Lizardi P, Zhao HY: Detection of DNA copy number alterations using penalized least squares regression. Bioinformatics. 2005, 21: 3811-3817. 10.1093/bioinformatics/bti646.
- Tibshirani R: Regression shrinkage and selection via the Lasso. J Roy Statist Soc Ser B. 1996, 58: 267-288.
- Tibshirani R, Saunders M, Rosset S, Zhu J, Knight K: Sparsity and smoothness via the fused lasso. J Roy Statist Soc Ser B. 2005, 67: 91-108. 10.1111/j.1467-9868.2005.00490.x.
- Tibshirani R, Wang P: Spatial smoothing and hot spot detection for CGH data using the Fused Lasso. Biostatistics. 2008, 9: 18-29. 10.1093/biostatistics/kxm013.
- Eilers HC, Menezes RX: Quantile smoothing of array CGH data. Bioinformatics. 2005, 21 (7): 1146-1153. 10.1093/bioinformatics/bti148.
- Li Y, Zhu J: Analysis of array CGH data for cancer studies using fused quantile regression. Bioinformatics. 2007, 23 (18): 2470-2476. 10.1093/bioinformatics/btm364.
- Zou H: The Adaptive Lasso and Its Oracle Properties. J Amer Stat Assoc. 2006, 101: 1418-1429. 10.1198/016214506000000735.
- Gao XL, Fang YX: Generalized degrees of freedom in shrinkage LAD estimators. Manuscript. 2009, Oakland University, Rochester, MI
- Schwarz G: Estimating the dimension of a model. Ann Statist. 1978, 6: 461-464. 10.1214/aos/1176344136.
- Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Roy Statist Soc Ser B. 1995, 57: 289-300.
- Storey JD: A direct approach to false discovery rates. J Roy Statist Soc Ser B. 2002, 64: 479-498. 10.1111/1467-9868.00346.
- Efron B, Tibshirani R: Empirical bayes methods and false discovery rates for microarrays. Genet Epidem. 2002, 23: 70-86. 10.1002/gepi.1124.
- Cox DR, Hinkley DV: Theoretical Statistics. 1974, Chapman and Hall, London
- Nakao K, Mehta KR, Fridlyand J, Moore DH, Jain AN, Lafuente A, Wiencke JW, Terdiman JP, Waldman FM: High-resolution analysis of DNA copy number alterations in colorectal cancer by array-based comparative genomic hybridization. Carcinogenesis. 2004, 25: 1345-1357. 10.1093/carcin/bgh134.
- Budinska E, Gelnarova E, Schimek MG: MSMAD: a computationally efficient method for the analysis of noisy array CGH data. Bioinformatics. 2009, 25 (6): 703-713. 10.1093/bioinformatics/btp022.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.