An adaptive classification model for peptide identification
BMC Genomics volume 16, Article number: S1 (2015)
Abstract
Background
Peptide sequence assignment is the central task in protein identification with MS/MS-based strategies. Although a number of post-database search algorithms for filtering target peptide-spectrum matches (PSMs) have been developed, the discrepancy among their output PSMs is usually significant, leaving a number of disputable PSMs. Current studies show that target PSMs that score close to decoy PSMs can hardly be separated from those decoys by the discriminant function alone.
Results
In this paper, we assign each target PSM a weight reflecting its possibility of being correct. We employ an SVM-based learning model to search for the optimal weight for each target PSM and develop a new scoring system, CRanker, to rank all target PSMs. Because routine database searches generate large PSM datasets, we use the Cholesky factorization technique for storing the kernel matrix to reduce the memory requirement.
Conclusions
Compared with PeptideProphet and Percolator, CRanker has identified more PSMs at similar false discovery rates over different datasets. CRanker has shown consistent performance on different test sets, validating the reasonableness of the proposed model.
Background
As proteins play central roles in interaction processes, the identification and quantification of proteins in a variety of samples is a fundamental task in proteomics [1]. In the commonly used protein identification process, mass spectrometry (MS)-based strategies coupled with sequence database searching routinely generate a large number of peptide-spectrum matches (PSMs); however, only a fraction of the PSMs, those with high confidence scores, are selected as true PSMs by statistical and machine learning algorithms [2].
For peptide identification, a number of commercial and non-commercial database search tools [3–6] have been developed that rank the PSMs based on scoring functions and report top-scored ones as target PSMs. In the early stage, empirical filters [7, 8] were described to validate the target PSMs, in which all matches above defined thresholds are accepted as correct and those below are assumed to be incorrect. However, the criteria for empirical filters may not be easy to define, as the scoring metrics used in database search tools, the quality of the mass spectrometry data, and the type of mass spectrometer used in the LC/MS/MS experiments all vary.
Recently, machine learning approaches were introduced to improve the accuracy of discrimination between correct and incorrect PSMs based on PSM data models. A widely used algorithm, PeptideProphet [9], employs an unsupervised learning approach to identify correct and incorrect PSMs. In PeptideProphet, posterior probabilities of the PSMs are computed by the expectation-maximization (EM) method, based on the assumption that the PSM data are drawn from a mixture distribution of correct and incorrect PSMs. Semi-supervised learning approaches exploit decoy data and use them as references for better estimation of discriminant scores. In [10], the PeptideProphet algorithm was extended to incorporate decoy PSMs into a mixture probabilistic model at the estimation step of the EM within a semi-supervised learning framework; the restrictive parametric assumptions were removed by using a variable-component mixture model and a semiparametric mixture model. Percolator [11] is another advanced post-database search method based on semi-supervised learning. The goal of Percolator is to increase the number of correct PSMs reported at a minimal FDR or q-value. Starting with a small set of trusted correct PSMs and a set of incorrect PSMs from searching a decoy database, Percolator iteratively adjusts the learning model to fit the dataset by ranking high-confidence PSMs higher than decoy peptide matches. Peptide identification can also be solved by a supervised learning approach, which first trains a classifier on PSMs whose labels are already known and then uses the classifier to assign labels to the unknown PSMs [12]. In [13], a fully supervised SVM method is proposed to improve the performance of Percolator. Different from other supervised learning methods using decoy databases, DeNoise [14] labels all target PSMs as "correct" but treats the low-scoring ones as noise.
The performance of a post-database search algorithm is usually evaluated by computing FDRs based on searching a target-decoy protein database [15–19].
DeNoise has shown its efficiency in eliminating incorrect or noisy target PSMs based on weights of the protease attributes. However, parameter selection is a big challenge in DeNoise. Based on a fuzzy SVM learning model, FC-Ranker [20] needs far fewer parameters and less input from the user than DeNoise does. FC-Ranker incorporates a sample-clustering procedure into the SVM classifier to estimate confidence in good target PSMs. Different from the traditional SVM model, in which each data sample contributes equally to the training error, FC-Ranker uses a fuzzy classification model to estimate the possibility of each target PSM being correct. The final score of each sample is determined by combining the value of the discriminant function with a fuzzy silhouette index. However, FC-Ranker does not provide an efficient method for calculating the weight of each PSM.
Similar to [20], we cast peptide identification as a binary classification problem in which "good" PSMs are labeled "+1" and "bad" PSMs are labeled "−1". In this paper, to overcome the weight problem of FC-Ranker, we treat the weight of the training error as a variable and employ the primal SVM technique [21] to reformulate the classification problem as the CRanker classification model. To handle large PSM datasets, we use the Cholesky factorization technique to improve memory utilization in model training. A new scoring policy is proposed to rank all PSMs, so that users can select the top-scored PSMs according to FDRs. The CRanker method has been validated on a number of PSM datasets generated by the SEQUEST database search tool. Compared with the benchmark post-database search algorithms PeptideProphet and Percolator, CRanker identifies more "good" PSMs at the same false discovery rates (FDRs).
Methods
Peptide identification and classification problem
In sequence database searching, a large number of PSMs are routinely generated; however, only a fraction of them are correct. The task of peptide identification is to choose the correct ones from the database search output. We formulate it as a binary classification problem, in which "good" PSMs are assigned to class "correct" or "+1" and "bad" PSMs to class "incorrect" or "−1". Different from typical classification problems, the target PSMs are not trustworthy, i.e., the '+1' labels (corresponding to target PSMs) are not reliable. To overcome this problem, FC-Ranker introduces a weight θ_i ∈ [0, 1] to indicate the reliability of the ith PSM, where 1 represents the highest confidence level and 0 the lowest. In fact, the learning model should rely more on reliable PSMs than on untrustworthy ones.
Formally, the classification problem for peptide identification is described as follows. Let Ω = {(x_i, y_i)}_{i=1}^{l} ⊆ R^q × {1, −1} be a set of l PSMs, where x_i ∈ R^q represents the ith PSM record with q attributes, and y_i = 1 or −1 is the corresponding label indicating a target or decoy PSM, respectively. Let Ω_+ and Ω_− denote the index sets of target and decoy PSMs, respectively.
SVM-based classifiers have shown their advantages in peptide identification [14, 20]. A typical SVM finds a discriminant function Ψ by solving
where c_1 > 0 is a constant, Loss(Ψ(x_i), y_i) is the loss function of (x_i, y_i), and ‖Ψ‖ is the norm of Ψ for regularization. In FC-Ranker, θ_i, i = 1, . . . , l, are treated as parameters, and it is a challenge to determine their values.
In [20], Problem (1) is solved by the linear programming SVM model as follows
where α ∈ R^l, b ∈ R, ξ = [ξ_1, . . . , ξ_l]^T ∈ R^l, and r ∈ R. Note that in this model, θ_i is a parameter, and it is not trivial to choose a good value.
CRanker method
CRanker classification model
In this section, we treat the weight θ_i as a variable and reformulate Problem (1) as the CRanker classification model. A new scoring scheme is developed for identifying correct PSMs based on the CRanker solution. Note that all '−1' labels (decoy PSMs) are reliable, and hence θ_i = 1 for i ∈ Ω_−. Moreover, we impose the constraint ∑_{i∈Ω_+} θ_i ≥ θ̄, where θ̄ > 0 is a constant, to identify as many good PSMs as possible. Hence, we solve the following optimization problem:
where c_1 > 0 is a constant.
Technically, we move the constraint ∑_{i∈Ω_+} θ_i ≥ θ̄ into the objective function and reformulate model (3) as
where c_2 > 0 is a constant.
By using the primal SVM technique [21], we formulate the CRanker classification model as
where K = (K_{ij})_{i,j=1}^{l}, K_{ij} = k(x_i, x_j), k(·, ·) is a given kernel, and K_i denotes the ith column of K. The solution of model (5) defines a discriminant function Ψ(x) = ∑_{i=1}^{l} β_i k(x_i, x).
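As a small sketch, the discriminant function defined by a solution β of model (5) can be evaluated as below, assuming the Gaussian kernel with σ = 1.0 used in the experiments later in the paper; the β vector would come from solving model (5) and is simply an input here.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian kernel values k(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    # between the rows of X and the rows of Z.
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Z**2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def discriminant(beta, X_train, x, sigma=1.0):
    # Psi(x) = sum_i beta_i * k(x_i, x): weighted sum of kernel
    # similarities between x and the training PSMs.
    return (gaussian_kernel(x[None, :], X_train, sigma) @ beta)[0]
```

A PSM x is then scored by the sign and magnitude of `discriminant(beta, X_train, x)`.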
Choose parameters c_{1} and c_{2}
Parameters c_1 and c_2 play a critical role in determining the value of the discriminant function Ψ(x_i). We aim at Ψ(x_i) ≥ 0 if x_i is a correct target PSM and Ψ(x_i) < 0 otherwise. Notice that y_i ≥ 0 for target PSMs and y_i < 0 for decoys; hence we want y_iΨ(x_i) ≥ 0 for both correct target PSMs and decoys. In particular, x_i with weight θ_i contributes a degree of confidence −c_2θ_i to the value of the objective function in problem (5). Meanwhile, x_i generates an empirical loss c_1θ_iη_i, where η_i = Loss(y_i, Ψ(x_i)) = max{0, 1 − y_i K_i^T β}^p, p ≥ 1. In order to guarantee that the objective function of problem (5) decreases by a certain amount, we enforce θ_i(c_1η_i − c_2) ≤ 0, which holds if and only if 0 ≤ η_i ≤ c_2/c_1. It implies
Hence, if parameters c_1 and c_2 satisfy c_2 ≤ c_1,
we have 1 − (c_2/c_1)^{1/p} ≥ 0, and then y_iΨ(x_i) ≥ 0.
Moreover, if we choose parameters c_1 and c_2 such that c_2/c_1 > 1, then there exists a degeneration risk that β = 0 and θ_i = 1 for all i ∈ Ω_+ (i.e., all target PSMs are identified as correct), in which case the objective function value is l(c_1 − c_2) < 0.
Therefore, we always select parameters c_2 ≤ c_1 in CRanker.
Cholesky factorization for large datasets
For large PSM datasets, the kernel matrix K ∈ R^{l×l} is usually not sparse, and thus it is a big challenge to load the whole of K into memory at once. Usually, the number of sample features is much smaller than the number of samples, and the kernel function k provides a convenient and cheap transformation. We therefore design a low-rank approximation of the large kernel matrix K by Cholesky factorization, requesting pairwise similarities between PSMs sequentially. Specifically,
where L ∈ R^{l×r}, L_{i,j} = 0 if i < j, and L_{1,1} ≥ L_{2,2} ≥ . . . ≥ L_{r,r} are the square roots of the r largest eigenvalues of K. The details can be found in [22].
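A hedged sketch of how such a low-rank factor can be built without ever materializing K: pivoted incomplete Cholesky requests one kernel column at a time, matching the sequential access pattern described above. The exact algorithm in [22] may differ in details such as pivot choice and stopping rule.

```python
import numpy as np

def incomplete_cholesky(kernel_column, diag, r, tol=1e-10):
    # Pivoted incomplete Cholesky: K ~= G @ G.T with at most r columns.
    # kernel_column(j) returns column j of K on demand, so the full
    # l-by-l kernel matrix never needs to be held in memory.
    l = diag.shape[0]
    G = np.zeros((l, r))
    d = diag.astype(float).copy()   # running diagonal of the residual K - G @ G.T
    for i in range(r):
        j = int(np.argmax(d))       # pivot: largest remaining diagonal entry
        if d[j] <= tol:             # residual is numerically zero: stop early
            return G[:, :i]
        G[:, i] = (kernel_column(j) - G[:, :i] @ G[j, :i]) / np.sqrt(d[j])
        d -= G[:, i] ** 2
    return G
```

Training then works with the tall, skinny factor G in place of K, which reduces memory from O(l²) to O(lr).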
Calculate the scores of PSMs
Based on the CRanker discriminant function Ψ(·), we assign PSM (x_i, y_i) a score
A larger score indicates that the PSM is more likely to be correct. The PSMs are ordered according to their scores, and a certain number of top-ranked PSMs are output to satisfy a preselected FDR.
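The selection step can be sketched as follows; the simple #decoys/#targets target-decoy estimate used here is an assumption of this sketch, not necessarily the exact FDR estimator used by CRanker.

```python
def select_at_fdr(scores, labels, fdr=0.05):
    # Walk down the PSM list in order of descending score, tracking the
    # target-decoy FDR estimate (#decoys / #targets among accepted PSMs).
    # Returns the target-PSM indices at the deepest cutoff whose estimated
    # FDR stays at or below the requested level.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    targets = decoys = 0
    best_cut = 0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            targets += 1
        else:
            decoys += 1
        if targets > 0 and decoys / targets <= fdr:
            best_cut = rank
    return [i for i in order[:best_cut] if labels[i] == 1]
```

Here `labels[i]` is +1 for a target PSM and −1 for a decoy, mirroring the y_i labels above.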
Results and discussion
We evaluated the performance of CRanker by comparing it with PeptideProphet and Percolator on PSMs generated by the SEQUEST search engine. The CRanker algorithm was implemented in Matlab R2010b running on a PC with an Intel Core i5-2400 CPU at 3.10 GHz × 4 and 8 GB RAM.
Experimental setup
Shotgun proteomics using multidimensional liquid chromatography coupled with tandem mass spectrometry was performed on all biological samples, including the universal proteomics standard set (UPS1), the S. cerevisiae Gcn4 affinity-purified complex (Yeast), S. cerevisiae transcription complexes using the Tal08 minichromosome (Tal08), and human peripheral blood mononuclear cells (PBMC). The RAW files generated from the different LC/MS/MS experiments were converted to mzXML format with the program ReadW. The MS/MS spectra were extracted from the mzXML files using the program MzXML2Search, and all data were processed using the SEQUEST software. For PeptideProphet, we used the Trans-Proteomic Pipeline V.4.0.2 (TPP), and the search outputs were converted to pep.XML format files using the TPP suite. For Percolator, we converted the SEQUEST outputs to a merged file in SQT format [23]. The UPS1 dataset, developed by the Sigma-Aldrich company, contains 48 purified human proteins digested with trypsin. The SEQUEST search results include 17,335 PSMs, among which 8974 match target peptides and 8361 match decoy peptides. The Yeast dataset contains 6652 proteins, and SEQUEST outputs 14,892 PSMs, among which 6703 match target peptides and 8189 match decoy peptides. For the Tal08 complexes, the tryptic peptides were analyzed on an LTQ-Orbitrap XL (Thermo Fisher) mass spectrometer using monoisotopic precursor selection (MiPS). This dataset contains 69,560 PSMs, among which 42,222 match target peptides and 27,338 match decoy peptides. PBMCs were analyzed with both the LTQ-Orbitrap XL and the LTQ-Orbitrap Velos. A 6-step MuDPIT experiment was performed on an LTQ-Orbitrap XL with MiPS either on (orbit-mips) or off (orbit-nomips). The orbit-mips dataset contains 103,679 PSMs, including 68,334 targets and 35,345 decoys, and the orbit-nomips dataset contains 117,751 PSMs, including 76,395 targets and 41,356 decoys.
For the LTQ-Orbitrap Velos experiments, 11-step MuDPIT experiments were performed, similar to the Orbitrap XL experiments, with MiPS either on (velos-mips) or off (velos-nomips). The velos-mips dataset contains 301,879 PSMs, including 208,765 targets and 93,114 decoys, and the velos-nomips dataset contains 447,350 PSMs, including 307,549 targets and 139,801 decoys. Samples were digested with trypsin. There are three types of tryptic peptides: fully digested, half-digested, and non-digested. The detailed PSM counts are summarized in Table 1.
Each dataset was divided into a training set and a test set at a 50/50 ratio. For large datasets, such as Tal08 and the PBMC sets, we randomly selected 20,000 samples from the training set for model training. This procedure was repeated n times; let Ψ_i(x), i = 1, . . . , n, be the discriminant function obtained in the ith repetition.
Then, the resulting discriminant function
was employed in all experiments. We set n = 6 in this paper. Each PSM is represented by a vector of nine attributes: xcorr, deltacn, sprank, ions, hit mass, enzN, enzC, numProt, and deltacnR. The first five attributes are inherited from SEQUEST, and the last four are defined as follows:

enzN: a Boolean variable indicating whether the peptide is preceded by a tryptic site;

enzC: a Boolean variable indicating whether the peptide has a tryptic C-terminus;

numProt: the number of times the corresponding protein is matched by other PSMs;

deltacnR: deltacn/xcorr.
A weight of 1.0 was assigned to xcorr and deltacn, and 0.5 to all other attributes. In the CRanker learning model, we set parameters c_1 and c_2 to 1.0 and p to 2, and choose the Gaussian kernel with kernel parameter σ = 1.0.
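The feature preparation described above can be sketched as follows; the attribute ordering and the handling of xcorr = 0 are assumptions of this sketch, not details given in the text.

```python
# Nine-attribute PSM feature vector with the weights stated above
# (1.0 for xcorr and deltacn, 0.5 for the rest). deltacnR is derived
# as deltacn/xcorr per the definition in the text.
ATTRS = ["xcorr", "deltacn", "sprank", "ions", "hit_mass",
         "enzN", "enzC", "numProt", "deltacnR"]
WEIGHTS = {a: (1.0 if a in ("xcorr", "deltacn") else 0.5) for a in ATTRS}

def psm_vector(psm):
    # psm: dict holding the first eight attributes; deltacnR is computed here.
    feats = dict(psm)
    feats["deltacnR"] = feats["deltacn"] / feats["xcorr"] if feats["xcorr"] else 0.0
    return [WEIGHTS[a] * float(feats[a]) for a in ATTRS]
```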
Results
Table 2 shows the total numbers of PSMs identified by CRanker, PeptideProphet, and Percolator over all datasets (training and test) at FDR ≈ 0.05. As the table shows, CRanker identifies more PSMs than the other two algorithms.
Table 3 shows the performance of CRanker on the test sets. The last column of Table 3 gives the ratio of PSMs identified on the test set to those identified on the whole dataset. As the training data are randomly chosen, a 50% ratio is the ideal scenario. On the four PBMC datasets, the ratios are very close to 50%, indicating that the CRanker classifier learned from the training data generalizes to the whole dataset. CRanker shows very similar learning performance on all datasets except UPS1, where it slightly overfits on the test set (43.26%) because the training dataset is relatively small.
We also examined the overlap of PSMs identified by PeptideProphet, Percolator, and CRanker. Figure 1 shows the overlap of the target PSMs identified by the three methods on the UPS1, Yeast, Tal08, and four PBMC datasets. On all datasets, the target PSMs output by CRanker have a large overlap with those of PeptideProphet and Percolator. The details are listed in Table 4. On UPS1, PeptideProphet has 497 (87.8%) target PSMs shared with CRanker, and Percolator has 390 (89.0%) target PSMs shared with CRanker. On all the other six datasets, these percentages exceed 90%. The results indicate that the majority of PSMs validated by PeptideProphet and Percolator were also validated by CRanker.
Finally, we compared the performance of CRanker, PeptideProphet, and Percolator using receiver operating characteristic (ROC) curves. Due to space limits, we include only the ROC curves on the orbit-nomips (Figure 2) and velos-nomips (Figure 3) datasets. As the figures show, CRanker reaches the highest true positive rates (TPRs) among the three algorithms across all false positive rate (FPR) levels.
Stability of CRanker
As training data points are randomly chosen from the training sets, the performance of the CRanker classifier may vary slightly. We examined the outputs of CRanker over 20 runs on the orbit-mips and velos-mips datasets.
Let P_i and #P_i be the set and the number of PSMs identified by CRanker in the ith run, i = 1, . . . , m. We compare the similarity of P_i and P_j, i ≠ j, i, j = 1, . . . , m, by
Then the stability of CRanker on a dataset is defined as the mean of all pairwise similarities over m runs:
Table 5 and Table 6 show the numbers of PSMs identified by CRanker in 20 runs on orbit-mips and velos-mips, respectively. The stability of CRanker is S = 99.17% on orbit-mips and S = 99.53% on velos-mips.
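The stability computation described above can be sketched as follows; since the exact pairwise similarity formula did not survive in this text, the overlap ratio |P_i ∩ P_j| / max(#P_i, #P_j) used here is an assumption of the sketch.

```python
from itertools import combinations

def stability(runs):
    # runs: list of sets P_1, ..., P_m of PSM identifiers from repeated
    # CRanker runs. The stability S is the mean of all pairwise
    # similarities, here taken as |Pi & Pj| / max(|Pi|, |Pj|).
    sims = [len(p & q) / max(len(p), len(q)) for p, q in combinations(runs, 2)]
    return sum(sims) / len(sims)
```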
Conclusion
We have proposed a new scoring system, CRanker, for peptide identification, in which the confidence in each PSM is taken into account during model training. CRanker employs the primal SVM technique and treats the weight of each PSM as a variable. We use the Cholesky factorization technique to improve memory utilization when training on large PSM datasets. The performance of CRanker has been compared with the benchmark algorithms PeptideProphet and Percolator over a variety of PSM datasets. The experimental studies show that CRanker outperforms the other two by identifying more targets at the same FDRs.
Abbreviations
PSM: peptide spectrum match

SVM: support vector machine

ROC: receiver operating characteristic

FDR: false discovery rate
References
Aebersold R, Mann M: Mass spectrometry-based proteomics. Nature. 2003, 422 (6928): 198-207.
Nesvizhskii AI: A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics. Journal of Proteomics. 2010, 73 (11): 2092-2123.
Eng JK, McCormack AL, Yates JR: An approach to correlate tandem mass spectral data of peptides with amino acid sequences in a protein database. Journal of the American Society for Mass Spectrometry. 1994, 5 (11): 976-989.
Perkins DN, Pappin DJC, Creasy DM, Cottrell JS: Probability-based protein identification by searching sequence databases using mass spectrometry data. Electrophoresis. 1999, 20 (18): 3551-3567.
Geer LY, Markey SP, Kowalak JA, Wagner L, Xu M, Maynard DM, Yang X, Shi W, Bryant SH: Open mass spectrometry search algorithm. J Proteome Res. 2004, 3 (5): 958-964.
Craig R, Beavis RC: TANDEM: matching proteins with tandem mass spectra. Bioinformatics. 2004, 20 (9): 1466-1467.
Link A, Eng JJ, Schieltz DM, Carmack E, Mize GJ, Morris DR, Garvik BM, Yates JR: Direct analysis of protein complexes using mass spectrometry. Nature Biotechnology. 1999, 17 (7): 676-682.
Washburn MP, Wolters D, Yates JR: Large-scale analysis of the yeast proteome by multidimensional protein identification technology. Nature Biotechnology. 2001, 19 (3): 242-247.
Keller A, Nesvizhskii AI, Kolker E, Aebersold R: Empirical statistical model to estimate the accuracy of peptide identifications made by MS/MS and database search. Analytical Chemistry. 2002, 74 (20): 5383-5392.
Choi H, Nesvizhskii AI: Semi-supervised model-based validation of peptide identifications in mass spectrometry-based proteomics. Journal of Proteome Research. 2007, 7 (1): 254-265.
Käll L, Canterbury JD, Weston J, Noble WS, MacCoss MJ: Semi-supervised learning for peptide identification from shotgun proteomics datasets. Nature Methods. 2007, 4 (11): 923-925.
Anderson D, Li W, Payan DG, Noble WS: A new algorithm for the evaluation of shotgun peptide sequencing in proteomics: support vector machine classification of peptide MS/MS spectra and SEQUEST scores. Journal of Proteome Research. 2003, 2 (2): 137-146.
Spivak M, Weston J, Bottou L, Käll L, Noble WS: Improvements to the Percolator algorithm for peptide identification from shotgun proteomics data sets. Journal of Proteome Research. 2009, 8 (7): 3737-3745.
Jian L, Niu X, Xia Z, Samir P, Sumanasekera C, Mu Z, Jennings JL, Hoek KL, Allos T, Howard LM, Edwards KM, Weil PA, Link AJ: A novel algorithm for validating peptide identification from a shotgun proteomics search engine. J Proteome Res. 2013, 12 (3): 1108-1119.
Elias JE, Gygi SP: Target-decoy search strategy for increased confidence in large-scale protein identifications by mass spectrometry. Nature Methods. 2007, 4 (3): 207-214.
Lam H, Deutsch EW, Aebersold R: Artificial decoy spectral libraries for false discovery rate estimation in spectral library searching in proteomics. Journal of Proteome Research. 2010, 9 (1): 605-610.
Choi H, Nesvizhskii AI: False discovery rates and related statistical concepts in mass spectrometry-based proteomics. Journal of Proteome Research. 2008, 7 (1): 47-50.
Käll L, Storey JD, MacCoss MJ, Noble WS: Assigning significance to peptides identified by tandem mass spectrometry using decoy databases. Journal of Proteome Research. 2008, 7 (1): 29-34.
Higgs RE, Knierman MD, Bonner Freeman A, Gelbert LM, Patil ST, Hale JE: Estimating the statistical significance of peptide identifications from shotgun proteomics experiments. Journal of Proteome Research. 2007, 6 (5): 1758-1767.
Liang X, Xia Z, Niu X, Link A, Pang L, Wu FX, Zhang H: Peptide identification based on fuzzy classification and clustering. Proteome Science. 2013, 11 (Suppl 1): S10. doi:10.1186/1477-5956-11-S1-S10
Chapelle O: Training a support vector machine in the primal. Neural Comput. 2007, 19 (5): 1155-1178. doi:10.1162/neco.2007.19.5.1155
Fine S, Scheinberg K: Efficient SVM training using low-rank kernel representations. J Mach Learn Res. 2002, 2: 243-264.
Bill N: SQT File Format. [https://noble.gs.washington.edu/proj/crux/sqt-format.html]
Acknowledgements
The proteomics data were generated with support from NIH grant GM064779 and Vanderbilt University School of Medicine IDEAS Program grant 1040669530. AJL and XN were supported by NIH grant GM064779. LJ was partially supported by the Natural Science Foundation of China under Grants 61403419 and 11326203. XL was partially supported by the Fundamental Research Funds for the Central Universities under Grant 15CX02051A and the Natural Science Foundation of Shandong Province under Grant ZR2014AP004.
Declarations
Publication of this article was funded by the Natural Science Foundation of Shandong Province under Grant ZR2014AP004.
This article has been published as part of BMC Genomics Volume 16 Supplement 11, 2015: Selected articles from the Fourth IEEE International Conference on Computational Advances in Bio and Medical Sciences (ICCABS 2014): Genomics. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcgenomics/supplements/16/S11.
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
XL and ZX designed the CRanker classification model and wrote the manuscript. LJ and XL designed the parameter selection and experiments. XN and AL provided the proteomics data and verified the experimental results. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Liang, X., Xia, Z., Jian, L. et al. An adaptive classification model for peptide identification. BMC Genomics 16 (Suppl 11), S1 (2015). https://doi.org/10.1186/1471-2164-16-S11-S1
DOI: https://doi.org/10.1186/1471-2164-16-S11-S1