An adaptive classification model for peptide identification

Background Peptide sequence assignment is the central task in protein identification with MS/MS-based strategies. Although a number of post-database search algorithms for filtering target peptide spectrum matches (PSMs) have been developed, the discrepancy among their outputs is usually significant, leaving a number of disputable PSMs. Current studies show that target PSMs whose scores are close to those of decoy PSMs can hardly be separated from the decoys by the discrimination function alone. Results In this paper, we assign each target PSM a weight indicating the possibility that it is correct. We employ an SVM-based learning model to search for the optimal weight of each target PSM and develop a new scoring system, CRanker, to rank all target PSMs. Because routine database searches generate large PSM datasets, we store the kernel matrix with the Cholesky factorization technique to reduce the memory requirement. Conclusions Compared with PeptideProphet and Percolator, CRanker has identified more PSMs at similar false discovery rates over different datasets. CRanker has shown consistent performance on different test sets, validating the reasonableness of the proposed model.


Background
As proteins play central roles in biological interaction processes, the identification and quantification of proteins in a variety of samples is a fundamental task in proteomics [1]. In the commonly used protein identification process, mass spectrometry (MS)-based strategies coupled with sequence database searching routinely generate a large number of peptide spectrum matches (PSMs); however, only a fraction of PSMs with high confidence scores are selected as true PSMs by statistical and machine learning algorithms [2].
For peptide identification, a number of commercial and non-commercial database search tools [3][4][5][6] have been developed to rank the PSMs based on scoring functions and report top-scored ones as target PSMs. In the early stage, empirical filters [7,8] were described to validate the target PSMs, in which all PSMs above defined thresholds are accepted as correct and all below them are assumed to be incorrect. However, the criteria for empirical filters are not easily defined, as the scoring metrics used in database search tools, the quality of the mass spectrometry data, and the type of mass spectrometer used in the LC/MS/MS experiments all vary.
Recently, machine learning approaches were introduced to improve the accuracy of discrimination between correct and incorrect PSMs based on PSM data models. A widely used algorithm, PeptideProphet [9], employs an unsupervised learning approach to identify correct and incorrect PSMs. In PeptideProphet, posterior probabilities of the PSMs are computed using the expectation maximization (EM) method, based on the assumption that the PSM data are drawn from a mixture distribution of correct and incorrect PSMs. Semi-supervised learning approaches exploit decoy data and use them as references for better estimation of discriminant scores. In [10], the PeptideProphet algorithm was extended to incorporate decoy PSMs into a mixture probabilistic model at the estimation step of the EM within a semi-supervised learning framework. The restrictive parametric assumptions were removed by using a variable component mixture model and a semi-parametric mixture model. Percolator [11] is another advanced post-database searching method based on semi-supervised learning. The goal of Percolator is to increase the number of correct PSMs reported under a minimal FDR or q-value. Starting with a small set of trusted correct PSMs and a set of incorrect PSMs from searching a decoy database, Percolator iteratively adjusts the learning model to fit the dataset by ranking high-confidence PSMs higher than decoy peptide matches. Peptide identification can also be solved by a supervised learning approach, which first trains a classifier on PSMs whose labels are already known and then uses the classifier to assign labels to the unknown PSMs [12]. In [13], a fully supervised SVM method is proposed to improve the performance of Percolator. Different from other supervised learning methods using decoy databases, De-Noise [14] labels all target PSMs as "correct" but treats the low-scoring ones as noise.
The performance of a post-database search algorithm is usually evaluated by computing FDRs based on searching a target-decoy protein database [15][16][17][18][19].
De-Noise has shown its efficiency in eliminating incorrect or noisy target PSMs based on weights of the protease attributes. However, parameter selection is a big challenge in De-Noise. Based on a fuzzy SVM learning model, FC-Ranker [20] needs much fewer parameters and less input from the user than De-Noise does. FC-Ranker incorporates a sample clustering procedure into the SVM classifier to estimate the confidence in good target PSMs. Different from the traditional SVM model, in which each data sample contributes equally to the training error, FC-Ranker uses a fuzzy classification model to estimate the possibility of each target PSM being correct. The final score of each sample is determined by combining the value of the discriminant function and a fuzzy silhouette index. However, FC-Ranker does not provide an efficient method for calculating the weight of each PSM.
Similar to [20], we cast peptide identification as a binary classification problem in which "good" PSMs are labeled "+1" and "bad" PSMs are labeled "-1". In this paper, to overcome the weight problem of FC-Ranker, we treat the weight of the training error as a variable and employ the primal SVM technique [21] to reformulate the classification problem as the CRanker classification model. To handle large PSM datasets, we use the Cholesky factorization technique to improve memory utilization in model training. A new scoring policy is proposed to rank all PSMs, and users can select the top-scored PSMs according to FDRs.
The CRanker method has been validated on a number of PSM datasets generated from the SEQUEST database search tool. Compared with benchmark post-database search algorithms PeptideProphet and Percolator, CRanker has identified more "good" PSMs at the same false discovery rates (FDRs).

Peptide identification and classification problem
In sequence database searching, a large number of PSMs are routinely generated; however, only a fraction of them are correct. The task of peptide identification is to choose the correct ones from the database search outputs. We formulate it as a binary classification problem, in which "good" PSMs are assigned to class "correct" or "+1" and "bad" PSMs to class "incorrect" or "-1". Unlike typical classification problems, the target PSMs are not trustworthy, i.e., the '+1' labels (corresponding to target PSMs) are not reliable. To overcome this problem, FC-Ranker introduces a weight θ_i ∈ [0,1] to indicate the reliability of the i-th PSM, where 1 represents the highest confidence level and 0 the lowest. Naturally, the learning model should rely more on reliable PSMs than on untrustworthy ones.
Formally, the classification problem for peptide identification is described as follows. Given a set of l PSMs, denoted by Ω = {(x_i, y_i) : i = 1, . . . , l}, where x_i ∈ R^q represents the i-th PSM record with q attributes, and y_i = 1 or −1 is the corresponding label indicating a target or decoy PSM. SVM-based classifiers have shown their advantages in peptide identification [14,20]. A typical SVM finds a discriminant function Ψ by solving

min_Ψ ||Ψ||² + c_1 Σ_{i=1}^{l} θ_i Loss(Ψ(x_i), y_i),   (1)

where c_1 > 0 is a constant, Loss(Ψ(x_i), y_i) is the loss function of (x_i, y_i), and ||Ψ|| is the norm of Ψ for regularization. In FC-Ranker, θ_i, i = 1, . . . , l are treated as parameters, and it is a challenge to determine their values.
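To make the role of the weights concrete, the weighted objective can be sketched as follows. This is a minimal illustration, written in Python for convenience (CRanker itself was implemented in Matlab), assuming a linear discriminant Ψ(x) = wᵀx + b and the hinge loss; the function name and toy data are ours:

```python
def weighted_svm_objective(w, b, X, y, theta, c1):
    """Weighted SVM objective: ||w||^2 + c1 * sum_i theta_i * hinge(y_i * Psi(x_i)).

    theta_i in [0, 1] down-weights unreliable target PSMs, so the model
    pays little penalty for misclassifying a low-confidence target PSM.
    """
    obj = sum(wi * wi for wi in w)  # ||w||^2 regularizer
    for xi, yi, ti in zip(X, y, theta):
        psi = sum(wj * xj for wj, xj in zip(w, xi)) + b  # Psi(x_i)
        hinge = max(0.0, 1.0 - yi * psi)                 # Loss(Psi(x_i), y_i)
        obj += c1 * ti * hinge
    return obj

# Toy data: one reliable target PSM, one decoy, one untrusted target (theta = 0.2)
X = [[1.0, 0.0], [0.0, 1.0], [0.2, 0.1]]
y = [1.0, -1.0, 1.0]
theta = [1.0, 1.0, 0.2]
obj = weighted_svm_objective([1.0, -1.0], 0.0, X, y, theta, c1=1.0)  # ≈ 2.18
```

With θ_3 = 0.2, the third PSM's hinge loss of 0.9 contributes only 0.18 to the objective, illustrating how unreliable target PSMs are de-emphasized during training.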
In [20], Problem (1) is solved by recasting it as a linear programming SVM model over the variables a ∈ R^l, b ∈ R, ξ = [ξ_1, . . . , ξ_l] ∈ R^l, and r ∈ R. Note that in this model, θ_i is a parameter, and it is not trivial to choose a good value for it.

CRanker method

CRanker classification model
In this section, we treat the weight θ_i as a variable and reformulate Problem (1) as the CRanker classification model. A new scoring scheme is developed for identifying correct PSMs based on the CRanker solution. Note that all '−1' labels (decoy PSMs) are reliable, and hence we fix θ_i = 1 for every decoy PSM. Moreover, we consider the constraint Σ_{i∈Ω+} θ_i ≥ θ̄, where θ̄ > 0 is a constant, to identify as many good PSMs as possible. Hence, we solve the following optimization problem:

min_{Ψ,θ} ||Ψ||² + c_1 Σ_{i=1}^{l} θ_i Loss(Ψ(x_i), y_i)  subject to  Σ_{i∈Ω+} θ_i ≥ θ̄,  θ_i ∈ [0,1],   (3)

where c_1 > 0 is a constant. Technically, we move the constraint Σ_{i∈Ω+} θ_i ≥ θ̄ into the objective function and reformulate model (3) as

min_{Ψ,θ} ||Ψ||² + c_1 Σ_{i=1}^{l} θ_i Loss(Ψ(x_i), y_i) − c_2 Σ_{i∈Ω+} θ_i,   (4)

where c_2 > 0 is a constant. By using the primal SVM technique [21], we formulate the CRanker classification model as

min_{β,b,θ} βᵀKβ + c_1 Σ_{i=1}^{l} θ_i Loss(K_iᵀβ + b, y_i) − c_2 Σ_{i∈Ω+} θ_i,   (5)

where K, with entries K_{ij} = k(x_i, x_j), is a given kernel matrix and K_i denotes the i-th column of K. The solution of model (5) defines a discriminant function Ψ(x_i) = K_iᵀβ + b.

Choosing parameters c_1 and c_2

Parameters c_1 and c_2 play a critical role in determining the value of the discriminant function Ψ(x_i). We aim at Ψ(x_i) ≥ 0 if x_i is a correct target PSM and Ψ(x_i) < 0 otherwise. Notice that y_i ≥ 0 for target PSMs and y_i < 0 for decoys, so y_i Ψ(x_i) ≥ 0 for both correct target PSMs and decoys. In particular, a PSM x_i with weight θ_i contributes the confidence term −c_2 θ_i to the value of the objective function in problem (5), while generating the empirical loss term c_1 θ_i h_i, where h_i = Loss(Ψ(x_i), y_i). To guarantee that the objective function of problem (5) decreases by a certain amount, we enforce the condition θ_i(c_1 h_i − c_2) ≤ 0, which holds if and only if h_i ≤ c_2/c_1. Moreover, if we choose parameters c_1 and c_2 such that c_2/c_1 > 1, then there is a degeneration risk that b = 0 and θ_i = 1 for all i ∈ Ω+ (i.e., all target PSMs are identified as correct), in which case the objective function value is l(c_1 − c_2) < 0. Therefore, we always select parameters c_2 ≤ c_1 in CRanker.
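The parameter condition above has a simple interpretation: once the discriminant function (and hence each loss h_i) is fixed, each θ_i multiplies the linear coefficient (c_1 h_i − c_2) in the objective of model (4), so the minimizing weight is 1 exactly when h_i ≤ c_2/c_1 and 0 otherwise. A small Python sketch of this observation (the function name is illustrative):

```python
def optimal_theta(h, c1, c2):
    """With the discriminant fixed, theta_i multiplies (c1*h_i - c2) in the
    objective of model (4); minimizing over theta_i in [0, 1] therefore sets
    theta_i = 1 exactly when h_i <= c2/c1, and theta_i = 0 otherwise."""
    assert c2 <= c1, "CRanker requires c2 <= c1 to avoid degeneration"
    return [1.0 if c1 * hi <= c2 else 0.0 for hi in h]

# With c1 = c2 = 1.0, target PSMs with loss at most 1 keep full weight;
# high-loss (decoy-like) target PSMs are zeroed out.
weights = optimal_theta([0.0, 0.5, 1.0, 2.5], 1.0, 1.0)
```

This also shows why c_2 ≤ c_1 matters: with c_2/c_1 > 1, even PSMs violating the margin (h_i > 1) would receive full weight, the degenerate case described above.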

Cholesky factorization for large datasets
For large PSM datasets, the kernel matrix K ∈ R^{l×l} is usually dense, and thus it is a big challenge to load the whole of K into memory at once. Usually, the number of sample features is much smaller than the number of samples, and the kernel function k provides a convenient and cheap transformation. We aim to build a low-rank approximation of the large kernel matrix K by Cholesky factorization, requesting pairwise similarities between PSMs sequentially. Specifically,

K ≈ L Lᵀ,

where L ∈ R^{l×r}, L_{i,j} = 0 if i < j, and L_{1,1} ≥ L_{2,2} ≥ . . . ≥ L_{r,r} are the square roots of the r largest eigenvalues of K. The details can be found in [22].
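As a sketch of how such a factorization can be computed without materializing K, the following Python implements a pivoted (incomplete) Cholesky factorization that queries kernel entries on demand. It is an illustrative stand-in for the method of [22], not the released CRanker code, and the names are ours:

```python
def pivoted_cholesky(kernel_entry, l, rank, tol=1e-10):
    """Low-rank approximation K ≈ L L^T via pivoted Cholesky.

    kernel_entry(i, j) returns K[i][j] on demand, so the full l x l
    kernel matrix never has to be held in memory at once.
    Returns L as a list of l rows, each of length rank.
    """
    d = [kernel_entry(i, i) for i in range(l)]   # residual diagonal
    L = [[0.0] * rank for _ in range(l)]
    perm = list(range(l))
    for k in range(rank):
        # pick the pivot with the largest residual diagonal entry
        p = max(range(k, l), key=lambda i: d[perm[i]])
        perm[k], perm[p] = perm[p], perm[k]
        piv = perm[k]
        if d[piv] <= tol:
            break  # remaining residual is numerically zero
        L[piv][k] = d[piv] ** 0.5
        for idx in range(k + 1, l):
            i = perm[idx]
            s = sum(L[i][j] * L[piv][j] for j in range(k))
            L[i][k] = (kernel_entry(i, piv) - s) / L[piv][k]
            d[i] -= L[i][k] ** 2
    return L
```

Only one column of kernel evaluations is requested per step, so the memory cost is O(l·r) rather than O(l²); for r much smaller than l this is what makes training on large PSM datasets feasible.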
A larger score value indicates that the PSM is more likely to be correct. The PSMs are ordered according to their scores, and a certain number of PSMs are output so as to satisfy a pre-selected FDR.
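This selection step can be sketched with the usual target-decoy FDR estimate, #decoys/#targets among the PSMs scoring at or above a cutoff. The function below is an illustrative sketch in Python, not the released CRanker code:

```python
def select_at_fdr(scores, is_decoy, fdr_max):
    """Order PSMs by score (descending) and return the indices of the target
    PSMs kept at the most permissive cutoff whose estimated FDR stays within
    fdr_max, where FDR is estimated as (#decoys)/(#targets) above the cutoff."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    best, accepted = [], []
    n_targets = n_decoys = 0
    for i in order:
        if is_decoy[i]:
            n_decoys += 1
        else:
            n_targets += 1
            accepted.append(i)
        if n_targets and n_decoys / n_targets <= fdr_max:
            best = list(accepted)  # remember the largest admissible set
    return best

scores = [10, 9, 8, 7, 6, 5]
is_decoy = [False, False, True, False, False, True]
kept = select_at_fdr(scores, is_decoy, 0.5)
```

In this toy run, all four targets are kept at FDR ≤ 0.5, while tightening the threshold to 0.2 keeps only the two targets that outscore every decoy.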

Results and discussion
We evaluated the performance of CRanker by comparing it with PeptideProphet and Percolator on PSMs generated by the SEQUEST search engine. The CRanker algorithm was implemented in Matlab R2010b running on a PC with an Intel Core i5-2400 CPU (3.10 GHz × 4) and 8 GB RAM.

Experimental setup
Shotgun proteomics using multidimensional liquid chromatography coupled with tandem mass spectrometry was performed on all biological samples, including the universal proteomics standard set (UPS1), the S. cerevisiae Gcn4 affinity-purified complex (Yeast), S. cerevisiae transcription complexes using the Tal08 minichromosome (Tal08), and human peripheral blood mononuclear cells (PBMC). The RAW files generated from the different LC/MS/MS experiments were converted to mzXML format with the program ReadW. The MS/MS spectra were extracted from the mzXML files using the program MzXML2Search, and all data were processed using the SEQUEST software. For PeptideProphet, we used the Trans-Proteomic Pipeline V.4.0.2 (TPP), and the search outputs were converted to pep.XML format files using the TPP suite. For Percolator, we converted the SEQUEST outputs to a merged file in SQT format [23]. The datasets are summarized in Table 1.
Each dataset was divided into a training set and a test set in a 50/50 ratio. For the large datasets, such as Tal08 and PBMC, we randomly selected 20,000 samples from the training set for model training. This procedure was repeated n times; let Ψ_i(x), i = 1, . . . , n be the discriminant function obtained at the i-th repetition.
Then, the combined discriminant function Ψ(x) = (1/n) Σ_{i=1}^{n} Ψ_i(x), the average of the n per-run functions, was employed in all experiments; we set n = 6 in this paper. Each PSM is represented by a vector of nine attributes: xcorr, deltacn, sprank, ions, hit mass, enzN, enzC, numProt, and deltacnR. The first five attributes are inherited from SEQUEST and the last four are defined as follows:
• enzN: a boolean variable indicating whether the peptide is preceded by a tryptic site;
• enzC: a boolean variable indicating whether the peptide has a tryptic C-terminus;
• numProt: the number of other PSMs matched by the corresponding protein;
• deltacnR: deltacn/xcorr.
Weight 1.0 was assigned to xcorr and deltacn, and 0.5 to all other attributes. In the CRanker learning model, we set parameters c_1 and c_2 to 1.0 and p to 2, and chose the Gaussian kernel with kernel argument s = 1.0.
Table 2 shows the total numbers of PSMs identified by CRanker, PeptideProphet, and Percolator over all datasets (training and test) at FDR ≈ 0.05. As we can see, CRanker identifies more PSMs than the other two algorithms. Table 3 shows the performance of CRanker on the test sets. The last column of Table 3 lists the ratio of the PSMs identified on the test set to those identified on the whole dataset. As the training data are randomly chosen, a 50% ratio is the ideal scenario. On the four PBMC datasets, the ratios are very close to 50%, indicating that the CRanker classifier learned from the training data works for the whole dataset. CRanker has shown very similar learning performance on all datasets except UPS1, on which it slightly overfitted the test set (43.26%) because the training dataset is relatively small.
We have also examined the overlap of PSMs among PeptideProphet, Percolator, and CRanker. Figure 1 shows the overlap of the target PSMs identified by the three methods on the UPS1, Yeast, Tal08, and four PBMC datasets. On all the datasets, the target PSMs output by CRanker have a large overlap with those of PeptideProphet and Percolator. The details are listed in Table 4.
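The nine-attribute vector and the per-attribute weights described above can be assembled as in the following sketch (the dictionary keys and the helper name are ours, not from the original implementation):

```python
# Per-attribute weights as described in the text: 1.0 for xcorr and deltacn,
# 0.5 for everything else.
FEATURE_WEIGHTS = {
    "xcorr": 1.0, "deltacn": 1.0, "sprank": 0.5, "ions": 0.5,
    "hit_mass": 0.5, "enzN": 0.5, "enzC": 0.5, "numProt": 0.5, "deltacnR": 0.5,
}

def psm_vector(psm):
    """Build the weighted nine-attribute vector for one PSM.

    The first eight attributes come from the search output; deltacnR is
    derived as deltacn / xcorr.
    """
    feats = dict(psm)
    feats["deltacnR"] = feats["deltacn"] / feats["xcorr"]
    order = ("xcorr", "deltacn", "sprank", "ions", "hit_mass",
             "enzN", "enzC", "numProt", "deltacnR")
    return [FEATURE_WEIGHTS[name] * feats[name] for name in order]

v = psm_vector({"xcorr": 2.0, "deltacn": 0.4, "sprank": 1, "ions": 10,
                "hit_mass": 0.01, "enzN": 1, "enzC": 0, "numProt": 3})
```

Fixing the attribute order here matters: the kernel in the classification model compares PSMs component-wise, so every record must present its weighted attributes in the same positions.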
On UPS1, PeptideProphet has 497 (87.8%) target PSMs shared with CRanker, and Percolator has 390 (89.0%) target PSMs shared with CRanker. On all the other six datasets, these percentages exceed 90%. The results indicate that the majority of PSMs validated by PeptideProphet and Percolator were also validated by CRanker.

Results
We also compared the performance of CRanker, PeptideProphet, and Percolator by receiver operating characteristic (ROC) curves. Due to space limits, we include only the ROCs on the orbit-nomips (Figure 2) and velos-nomips (Figure 3) datasets. As we can see, CRanker achieves the highest true positive rates (TPRs) among the three algorithms across all false positive rate (FPR) levels in both figures.

Stability of CRanker
As training data points are randomly chosen from training datasets, the performance of CRanker classifier may vary slightly. We counted the outputs of CRanker in 20 runs on orbit-mips and velos-mips datasets.
Let P_i and #P_i be the set and the number of PSMs identified by CRanker at the i-th run, i = 1, . . . , m. We compare the similarity of P_i and P_j, i ≠ j, i, j = 1, . . . , m, by

Sim(P_i, P_j) = |P_i ∩ P_j| / max(#P_i, #P_j).

Then the stability of CRanker on a dataset is defined as the mean of all pairwise similarities over the m runs:

S = (2 / (m(m − 1))) Σ_{i<j} Sim(P_i, P_j).

Table 5 and Table 6 show the numbers of PSMs identified by CRanker in 20 runs on orbit-mips and velos-mips, respectively. The stability of CRanker is S = 99.17% on orbit-mips and S = 99.53% on velos-mips.
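Under the assumption that pairwise similarity is measured as the size-normalized overlap |P_i ∩ P_j| / max(#P_i, #P_j), the stability computation can be sketched in Python as:

```python
def stability(runs):
    """Mean pairwise similarity of the PSM sets identified over m runs.

    Each element of `runs` is the set of PSM identifiers from one run;
    similarity of two runs is the (assumed) size-normalized overlap
    |P_i ∩ P_j| / max(|P_i|, |P_j|).
    """
    m = len(runs)
    sims = [len(runs[i] & runs[j]) / max(len(runs[i]), len(runs[j]))
            for i in range(m) for j in range(i + 1, m)]
    return sum(sims) / len(sims)

# Toy example: three runs that mostly agree on the identified PSMs.
s = stability([{1, 2, 3, 4}, {1, 2, 3}, {2, 3, 4, 5}])
```

A value near 1 means the randomly subsampled training sets lead to nearly identical output PSM lists, which is what the reported 99.17% and 99.53% stabilities indicate.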

Conclusion
We have proposed a new scoring system, CRanker, for peptide identification, in which the confidence in each PSM is taken into account in the model training process. CRanker employs the primal SVM technique and treats the weight of each PSM as a variable. We use the Cholesky factorization technique to improve memory utilization in model training for large PSM datasets.