Volume 14 Supplement 5
Literature classification for semi-automated updating of biological knowledgebases
© Olsen et al.; licensee BioMed Central Ltd. 2013
Published: 16 October 2013
As the output of biological assays increases in resolution and volume, the body of specialized biological data, such as functional annotations of gene and protein sequences, enables extraction of the higher-level knowledge needed for practical application in bioinformatics. Whereas common types of biological data, such as sequence data, are extensively stored in biological databases, functional annotations, such as immunological epitopes, are found primarily in semi-structured formats or free text embedded in primary scientific literature.
We defined and applied a machine learning approach for literature classification to support updating of TANTIGEN, a knowledgebase of tumor T-cell antigens. Abstracts from PubMed were downloaded and classified as either "relevant" or "irrelevant" for database update. Training and five-fold cross-validation of a k-NN classifier on 310 abstracts yielded a classification accuracy of 0.95, thus showing significant value in support of data extraction from the literature.
We here propose a conceptual framework for semi-automated extraction of epitope data embedded in scientific literature using principles from text mining and machine learning. The addition of such data will aid in the transition of biological databases to knowledgebases.
Keywords: text mining, machine learning, biological databases, automation
Databases are the cornerstone of bioinformatics analyses. As experimental methods advance and high-throughput output increases in volume, the number of biological data repositories is growing rapidly. Similarly, the quantity and complexity of the data are growing, requiring both the refinement of analyses and higher resolution and accuracy of results. In addition to the most commonly used biological data types, such as sequence data (gene and protein), structural data, and quantitative data (gene and protein expression), increasing amounts of high-level functional annotations of biological sequences are needed to enable detailed studies of biological systems. These high-level annotations are also captured in databases, but to a much smaller degree than the essential data types. The literature, however, is a rich source of functional annotation information, and combining these two types of sources provides the body of data, information, and knowledge needed for practical application in bioinformatics and clinical bioinformatics. Extraction of knowledge from these sources is facilitated through emerging knowledgebases (KBs) that enable not only data extraction, but also data mining, extraction of patterns hidden in the data, and predictive modeling. Thus, KBs bring bioinformatics one step closer to the experimental setting than traditional databases, since they are intended to enable summarization of hundreds of thousands of data points and in silico simulation of experiments all in one place.
A knowledge-based system (KBS) is a computational system that uses logic, statistics, and artificial intelligence tools to support decision making and the solving of complex problems. Knowledge-based systems include specialist databases designed for data mining tasks and knowledge management databases (knowledgebases). A KBS comprises a knowledgebase (KB), a set of analytical tools, a logic unit, and a user interface. The logic unit connects user queries and determines, using workflows, how analytical tools are applied to the knowledge base to perform the analysis and produce the results. Primary sources such as UniProt or GenBank, as well as specialized databases such as the Influenza Research Database (IRD) and the Los Alamos National Laboratory HIV Databases (http://www.hiv.lanl.gov/), offer a number of integrated tools and annotated data, but their analytical workflows are limited to basic operations. Examples of more advanced KBS include FlaviDb, a KBS of flavivirus antigens; FluKB, a KBS of influenza antigens (http://research4.dfci.harvard.edu/cvc/flukb/); and TANTIGEN, a KBS of tumor antigens (http://cvc.dfci.harvard.edu/tadb/index.html). A KBS focuses on a narrow domain and offers a set of analytical tools to perform complex analyses and decision support. A KBS must contain sufficient data and annotations to enable data mining for summarization, pattern discovery, and the building of models that simulate the behavior of real systems. For example, FlaviDb enables summarization of sequence diversity for more than 50 species of flaviviruses. It also enables analysis of the complete set of predicted T cell epitopes for 15 common HLA alleles and can display the complete landscape of both predicted and experimentally verified HLA-associated peptides. FluKB extends these antigen analysis functionalities with analysis of cross-reactivity of all entries for neutralizing antibodies.
Both these examples focus on identification, prediction, variability analysis, and cross-reactivity of immune epitopes. The implementation of workflows in these KBS enables complex analyses to be performed by filling in a single query form, with the results presented in a single report.
To get high quality results, we must ensure that KBS are up to date and error-free (to the extent possible). Since the information in KBS is derived from multiple sources, providing high quality updates is complex. Manual updating of KBS is impractical, so automation of the updating process is needed. Automated updating of data and annotations by extracting data from primary databases such as UniProt, GenBank, or IEDB is relatively simple, since these sources enable export of data in standardized formats, mainly XML. Ideally, functional annotations would be deposited by direct submission to appropriate databases by the discoverers, but a historical lack of submission standards for higher-level biological data has led to the vast majority of this information being recorded only in primary scientific literature. The use of data embedded in primary scientific literature, accessible through PubMed or Google Scholar, is markedly more complex. The information stored in abstracts or full texts is, at best, semi-structured, but typically it is provided as free text. Given that as many as tens of thousands of articles may be published each year on a given topic, access to this information and assessment of its relevance require efficient methods for identification of publications of interest and rapid assessment of their suitability for inclusion in the KBS. Such analysis is facilitated through the use of text mining techniques, ranging from simple statistical pattern learning based on term frequencies to complex natural language processing techniques, in order to produce text categorization, document summarization, information retrieval, and ultimately data mining. A long-term solution to this issue invariably involves standardizing the submission and storage of complex biological data, but the knowledge currently embedded in the literature remains available for extraction.
Text mining operations have previously been applied for specific knowledge extraction for vaccine development, as well as document classification for separation of abstracts by topic and for semi-automated extraction of allergen cross-reactivity information. In this article, we define the conceptual framework for semi-automated updating of our tumor antigen knowledgebase, TANTIGEN, using data parsing, basic text mining operations, and a standardized submission system.
Results and discussion
Depending on the content of the KBS one wishes to update, there are issues pertaining to the complexity of biological data that require consideration. In particular, we must address the diversity of data types, the diversity of data formats, the dispersion of data across different sources, and the size of the data sets. There are many biological data types; the most common include sequence data (nucleotide or protein), molecular structures, expression data, and functional annotations. Data can be stored and retrieved as structured text, table formats, semantic web formats (such as RDF, OWL, or XML), or non-structured text. Depending on the target data format, retrieval can be performed by direct extraction, parsing, text mining, or manual extraction. Text mining, manual extraction, or a combination of the two is common in extracting high-level data, such as functional annotations. Data availability and individual entry size vary between data types, presenting a computational challenge in terms of retrieval, handling, analysis, and storage. Additional factors that affect the complexity of the updating task are data heterogeneity, the integration of multiple data types after retrieval, and provenance tracking for quality assessment.
Step 1: Produce status report of current knowledgebase build. This report will serve as the filter for the two main updating tasks: update of existing entries and update of data body by introduction of new entries.
Step 2: Automatic download of data from selected sources. Most biological data repositories enable full download of latest database build and most allow automated retrieval via GNU Wget or FTP clients. If automatic download is not possible, this step can be performed manually.
Step 3: Automatic data pre-processing. Depending on the data format, pre-processing steps can be automated in various ways. For simple syntax-based formats such as XML, parsing of the desired data is possible, whereas for non-standardized formats, such as raw text, pre-processing involves tasks derived from text mining, such as word stemming, stop word removal, and generation of a document-term matrix (DTM).
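The pre-processing tasks of Step 3 can be sketched as follows. This is a minimal, illustrative Python example, not the pipeline used in this work (which relied on the R tm package): stemming is omitted, and the tiny stop-word list and corpus are hypothetical.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "of", "a", "in", "and", "is", "to", "for"}  # toy stop-word list

def preprocess(text):
    """Lower-case, strip punctuation, drop stop words; word stemming omitted for brevity."""
    tokens = re.findall(r"[a-z0-9-]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def document_term_matrix(corpus):
    """Build a DTM as one term-count row per document over a shared, sorted vocabulary."""
    counts = [Counter(preprocess(doc)) for doc in corpus]
    vocab = sorted(set().union(*counts))
    return vocab, [[c[term] for term in vocab] for c in counts]

corpus = ["The HLA-A2 epitope is immunogenic.",
          "Expression of the antigen in tumor tissue."]
vocab, dtm = document_term_matrix(corpus)
```

Note that the tokenizer deliberately keeps digits and hyphens, so immunologically meaningful terms such as "hla-a2" survive as single tokens.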
Step 4: Text categorization. If the desired information is not available in a standardized format - for example, if it is only available in primary scientific literature - text mining or machine learning methods can be applied to direct and streamline the manual extraction. A text corpus may contain documents that fall into two or more categories, of which only one or a few are of interest for a given task. To maximize the efficiency of manual data extraction, it is helpful to classify documents before embarking on data extraction. Options for classification using machine learning methods include unsupervised methods, such as clustering and blind signal separation, and supervised methods, such as artificial neural networks, support vector machines, nearest neighbor methods, Naive Bayes, and decision trees, among others. For some of these algorithms, feature extraction using matrix factorization methods, such as principal component analysis (singular value decomposition), can be useful to reduce the dimensionality of the DTM, which can become quite large.
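As an illustrative sketch of such dimensionality reduction, the leading right-singular vector of a small DTM can be found by power iteration on A^T A, and each document reduced to its coordinate along that component. A production pipeline would use a numerical library's SVD/PCA routine; the matrix here is hypothetical.

```python
import math
import random

def top_component(matrix, iters=200, seed=0):
    """Power iteration on A^T A: converges to the first right-singular vector of A."""
    rng = random.Random(seed)
    n = len(matrix[0])
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        av = [sum(row[j] * v[j] for j in range(n)) for row in matrix]      # A v
        w = [sum(matrix[i][j] * av[i] for i in range(len(matrix)))         # A^T (A v)
             for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def project(matrix, v):
    """Project each document (row) onto the component: one coordinate per document."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]

dtm = [[2, 0, 1], [0, 3, 1], [2, 1, 1]]  # toy 3-document, 3-term DTM
v = top_component(dtm)
coords = project(dtm, v)
```

Repeating this with deflation yields further components, which is how a truncated SVD reduces a very wide DTM to a handful of features per document.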
Step 5: Manually extract data and information from categorized texts. Some higher-level data types, such as functional annotations, are often found in tables, figures, legends, or supplementary materials of primary scientific articles, making automated extraction of this information highly complex or practically impossible . A manual extraction step may therefore be needed and simultaneously allow for quality control.
Step 6: Submission of new or updated entries to the KBS. Submission of extracted data to the KBS should be standardized to the highest degree possible in order to ensure adherence to the standardized format and the quality of each entry. The use of a standardized submission form allows non-experts to perform the task of updating. Automated extraction of related data from primary databases can minimize manual entry of data and, when new additions mismatch existing entries, provide automated error detection to be manually addressed.
Step 7: Refining categorization by increasing the training corpus. Each manually inspected document (classified either as relevant or irrelevant) represents a new addition to the training data used for document categorization. In addition to refining the model and improving performance, a feedback loop to the classification module reduces the need for a large initial training corpus.
Case study: TANTIGEN tumor T cell antigen database
Selection of useful tumor T cell antigens represents a major bottleneck in the study and design of cancer immunotherapies. Selecting immunotherapy targets involves the selection of antigens and the analysis of their immune epitopes. This process has been greatly enhanced by the use of computational immunology methods. However, as computational efforts produce vast numbers of potential targets, the bottleneck shifts to the wet lab, where vaccine target candidates must be validated for both relevance and immunogenicity before they are included in potential vaccine constructs. Great advances have been made in techniques for high-throughput epitope validation [13, 14], but as computational methods grow ever more powerful, so does the need for post-analysis verification of results. An efficient catalogue of experimentally validated epitopes for cross-referencing new predictions against past experimental data is a valuable resource that could reduce the need for, and streamline, further experimentation. Several specialized resources for this and similar purposes have been established, for example: IRD, the HIV databases (http://www.hiv.lanl.gov), the Human Papillomavirus T cell Antigen Database (http://cvc.dfci.harvard.edu/hpv/index.html), as well as general HLA binder repositories such as SYFPEITHI and the Immune Epitope Database (IEDB).
The TANTIGEN database was established in 2007 as a tumor-specific T cell antigen database. It provides the scientific community with a curated repository of experimentally validated tumor T-cell antigens, and matched T-cell epitopes and HLA binders. Each antigen entry contains detailed information about somatic mutations from the Catalogue of Somatic Mutations in Cancer (COSMIC) , splice isoforms from UniProt/Swiss-Prot, gene expression profiles from UniGene, and known T-cell epitopes from secondary databases or literature. Additionally, TANTIGEN is equipped with a number of analysis tools such as BLAST search , multiple sequence alignment using MAFFT , T-cell epitope/HLA ligand prediction [20, 21] and visualization, and tumor antigen classification .
Keeping data up to date in a KBS represents a major bottleneck in the maintenance of TANTIGEN. In 2012, 7,322 articles matching the keywords "tumor antigen" were indexed in PubMed. Although many of these articles may not contain tumor T cell antigens, the growing quantity of literature represents a major bottleneck in the maintenance of curated databases.
The data types to be updated in TANTIGEN are experimentally characterized T cell epitopes and HLA ligands, and expression and variability information for the proteins that harbor them. In build 1 of TANTIGEN, these data were collected from several different sources: manual collection from the literature, the Peptide database: T cell-defined tumor antigens (http://www.cancerimmunity.org/peptide/), the listing of human tumor antigens recognized by T cells by Parmiani and colleagues [23, 24], and parsing from IEDB, as well as four other public databases that are outdated or unavailable at present. The primary resource for these data remains manual collection from the literature, as no primary database is actively collecting or curating tumor antigen data. IEDB offers some curated cancer data (2.7% of available data curated as of November 2009), but in their February 2011 newsletter they announced that they will no longer curate cancer epitope data.
Preliminary filtering of literature
Examples of PubMed results from a selection of keyword searches (publication dates December 1, 2009 - March 29, 2013):
- cancer OR tumor OR antigen OR epitope
- (tumor OR cancer) AND (antigen OR epitope)
- tumor AND antigen
- tumor AND antigen AND epitope
- tumor AND antigen AND epitope AND T cell
Formal approach to updating
Step 1: Status report of TANTIGEN build 1. The status report for TANTIGEN lists 251 unique proteins and their corresponding UniProt accession numbers. Many of these proteins have multiple splice isoforms, for which UniProt accession numbers are also listed. All UniProt accession numbers are listed because these entries are subject to updating by direct parsing of UniProt data downloads. Similarly, PubMed IDs are listed for all referenced articles. These articles represent relevant literature, and the corresponding abstracts can be parsed directly from the PubMed abstract download into the training document set. Build 1 of TANTIGEN has 4,006 curated antigen entries.
Step 2: Automatic data download. The latest versions of UniProt and COSMIC are downloadable as XML files from the database web sites. PubMed results can be narrowed down by search term; in this case, we used "(cancer OR tumor) AND (antigen OR epitope)", but this can be refined in later iterations if suitable. Due to the very high volume of abstracts in PubMed, query results can also be filtered by date, and we here filtered out articles published before the last TANTIGEN update. Search results are downloadable in XML format.
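Extracting abstracts from such an XML download can be sketched with Python's standard xml.etree module. The record below is a simplified mock of the PubMed XML layout (real records carry many more fields), used here only to make the example self-contained.

```python
import xml.etree.ElementTree as ET

# Simplified mock of a PubMed efetch XML download.
xml_data = """<PubmedArticleSet>
  <PubmedArticle>
    <MedlineCitation>
      <PMID>12345678</PMID>
      <Article>
        <ArticleTitle>A novel tumor antigen epitope</ArticleTitle>
        <Abstract><AbstractText>We report an HLA-A2 restricted epitope...</AbstractText></Abstract>
      </Article>
    </MedlineCitation>
  </PubmedArticle>
</PubmedArticleSet>"""

def parse_abstracts(xml_text):
    """Extract (PMID, title, abstract) tuples from a PubMed-style XML document."""
    root = ET.fromstring(xml_text)
    records = []
    for article in root.iter("PubmedArticle"):
        pmid = article.findtext(".//PMID")
        title = article.findtext(".//ArticleTitle")
        abstract = article.findtext(".//AbstractText") or ""  # some records lack abstracts
        records.append((pmid, title, abstract))
    return records

records = parse_abstracts(xml_data)
```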
Step 3: Automatic data pre-processing. The COSMIC and UniProt XML downloads needed no further pre-processing for parsing. The PubMed abstracts were extracted from the XML and parsed into a text corpus format for pre-processing. The following tasks were performed on the corpus: lower case transformation, removal of stop words, removal of general punctuation, word stemming, and white space stripping. Numbers are usually removed in text mining pre-processing, but we retained them here to preserve the terms defining HLA alleles, CD receptors, and other immunologically relevant descriptors.
Step 4: Abstract categorization. The resulting DTM was Tf-Idf transformed, and each abstract was classified using a k-Nearest Neighbor (k-NN) classifier trained on 226 manually pre-classified abstracts. Iterative refinement of the algorithm showed that a six-nearest-neighbor model yielded the best results. Each abstract in the corpus was given a probability score based on the ratio of relevant neighbors in the model. The output list was ordered from most probable to least, thus eliminating the need to define a static threshold.
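The neighbor-ratio scoring of Step 4 can be sketched as follows. This illustrative Python example uses cosine similarity on toy term vectors rather than the actual Tf-Idf-weighted DTM; the training data and k values are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two term vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb) if na and nb else 0.0

def knn_relevance(query, training, k=6):
    """Score an abstract as the fraction of 'relevant' labels among its k nearest neighbors."""
    nearest = sorted(training, key=lambda t: cosine(query, t[0]), reverse=True)[:k]
    return sum(1 for _, label in nearest if label == "relevant") / k

# Toy training set of (term vector, label) pairs; in TANTIGEN, k = 6 gave the best accuracy.
training = [([1, 0, 1], "relevant"), ([1, 1, 1], "relevant"),
            ([0, 1, 0], "irrelevant"), ([0, 1, 1], "irrelevant"),
            ([1, 0, 0], "relevant"), ([0, 0, 1], "irrelevant")]
score = knn_relevance([1, 0, 1], training, k=3)
```

Sorting all unclassified abstracts by this score, from most to least probable, gives the ranked list described above without a fixed decision threshold.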
Step 5: Manually extract antigen data from literature. The articles corresponding to each abstract classified as relevant were accessed through PubMed or the publishing journal. Epitopes, HLA ligands, and related data, such as HLA restriction and protein of origin, were extracted. For TANTIGEN build 2, we manually searched the top 273 articles out of the 48,130 classified articles. The cutoff of 273 articles was chosen because article relevance began to decrease drastically at that point in the ordered list during manual data extraction.
Step 6: Submission of data. Submission was done by filling out a standardized TANTIGEN submission form for each antigen. Additional information was parsed directly from the downloaded UniProt XML, based on the protein of origin. Similarly, mutation entries and splice variants were automatically linked by cross-referencing with the COSMIC XML. Entries in TANTIGEN were automatically linked to each other where applicable (splice isoforms, mutation entries, etc.). Updating of existing entries was performed by automated parsing from the UniProt XML, as some entries had been removed, assigned new accessions, or updated with additional splice isoforms. This step also serves as error detection: if an existing entry in TANTIGEN does not match the information entered in the standardized submission form, the user is notified and prompted to determine whether the existing entry, the submission, or both are erroneous. Similarly, if protein information extracted from UniProt does not match that in COSMIC, the user is prompted to resolve the issue, thus increasing data quality.
Step 7: Refine training set with new entries. The TANTIGEN submission form has an additional field in which the curator performing the manual submission is prompted to classify the article as "relevant" or "irrelevant". This feature was used to feed manually inspected abstracts back into the training corpus, to increase its size and thus the classifier's performance. All false positives and false negatives were fed back, but only a randomly selected fraction of true positives and true negatives, as these may further bias a potentially already biased model.
Results of TANTIGEN update
Accuracy of classification
The average accuracy in the five-fold cross-validation training of the k-NN model with six nearest neighbors was 0.95, with a sensitivity of 0.96 and a specificity of 0.93. Model performance is likely to increase with the size of the training set, and particularly with the addition of false positives from the manual extraction step. True positives should also be added to the training corpus, but including all true positives may further bias a potentially biased model. Special care should be taken in initial classification rounds to extract and include false negatives, as low sensitivity is highly detrimental to the quality and completeness of the update. Wrongfully discarding relevant literature will not only lead to potentially permanent loss of valuable data, but will also negatively affect classifier performance when misclassified training data are fed back into the model.
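For reference, these measures are computed from the cross-validation confusion counts as follows. The counts below are hypothetical, chosen only to approximate the reported figures on a balanced 310-abstract set.

```python
def metrics(tp, tn, fp, fn):
    """Standard classifier performance measures from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of relevant abstracts recovered
    specificity = tn / (tn + fp)   # fraction of irrelevant abstracts rejected
    return accuracy, sensitivity, specificity

# Hypothetical counts consistent with the reported accuracy/sensitivity/specificity
acc, sens, spec = metrics(tp=149, tn=144, fp=11, fn=6)
```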
Results of manual extraction of tumor T-cell antigens
Manual extraction of new antigenic proteins and tumor T-cell antigens was performed from the classified literature. Since classification was based on the six nearest neighbors, the body of classified abstracts was divided into seven groups, corresponding to whether an abstract had from zero to six relevant neighbors in the training set. Out of the 48,130 classified abstracts, 117 had six relevant neighbors, 156 had five, 212 had four, 859 had three, 3,489 had two, 12,738 had one, and 30,856 abstracts had zero relevant neighbors. We manually examined the top 273 scoring papers, in which we found 13 new antigenic proteins harboring 32 new tumor T-cell epitopes. Additionally, we found more than 100 new T-cell epitopes in proteins already recorded as tumor antigens in TANTIGEN.
Training set refinement iteratively increases classification accuracy
Abstract category signatures
Specialized biological databases are gradually moving from data repositories towards knowledge-based systems. Enriching basic biological data with higher-level functional annotations and facilitating specialized analyses in organized workflows enables extraction of higher-level knowledge. Currently, however, functional annotations are primarily stored in the literature, rather than in standardized formats of primary biological databases. As the quantity of this information increases, easy access to multiple layers of biological data and information enables improved extraction of knowledge, thus increasing the value to the user.
We here present a conceptual framework for automating the process of updating biological databases and knowledgebases with standardized and non-standardized data from both primary and secondary data repositories, as well as the literature. We deployed a text mining-based approach to categorize literature, based on defining term signatures of freely available article abstracts, which enables significantly faster manual extraction of relevant data. We have applied this conceptual framework to literature for updating the TANTIGEN KBS of tumor T cell antigens. Training of a k-NN classifier on 260 abstracts yielded a classification accuracy of 0.95, thus showing significant value in support of data extraction from the literature.
All three databases offer downloads in XML format. The desired information was directly parsable from the UniProt/Swiss-Prot and COSMIC downloads, but only abstracts were available for PubMed entries, and protein information and epitopes from these entries required manual extraction. To aid the process of KB updating, text mining and machine learning tools were employed to classify text entries as either relevant (containing T cell epitopes) or irrelevant (not containing T cell epitopes).
Classification of literature abstracts
A corpus for classification was extracted from PubMed using the search terms "(tumor OR cancer) AND (antigen OR epitope)". Each entry in the corpus contains the article abstract, title, and MeSH terms. Before classification, a number of term transformation steps were taken: lower case transformation, removal of numbers, removal of stop words, removal of punctuation, word stemming, synonym consolidation using the WordNet database, and white space removal. Term pre-processing of the text corpus was done using the R package tm [27, 28]. After term counting, a term frequency-inverse document frequency (Tf-Idf) transformation was applied for background correction.
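The Tf-Idf weighting applied after term counting can be sketched as follows. The authors used the R package tm; this is an illustrative Python equivalent of the standard tf * log(N/df) scheme, applied to a toy matrix.

```python
import math

def tf_idf(dtm):
    """Weight a document-term matrix by tf * log(N / df).

    Terms occurring in every document get idf = log(1) = 0, so ubiquitous,
    uninformative terms are down-weighted to zero.
    """
    n_docs = len(dtm)
    n_terms = len(dtm[0])
    df = [sum(1 for row in dtm if row[j] > 0) for j in range(n_terms)]  # document frequency
    return [[row[j] * math.log(n_docs / df[j]) if df[j] else 0.0
             for j in range(n_terms)]
            for row in dtm]

dtm = [[2, 0, 1], [0, 3, 1], [1, 1, 1]]  # toy 3-document, 3-term count matrix
weighted = tf_idf(dtm)
```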
Abstracts were classified using the k-NN algorithm from the R package class. The classifier was trained, and its performance evaluated, for 1-155 nearest neighbors using five-fold cross-validation on a set of 310 abstracts (155 abstracts of irrelevant articles and 155 abstracts of relevant articles). This training set was manually assembled for initial training. Classification was done using six neighbors in the k-NN algorithm, since this number of neighbors proved most accurate.
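The grid search over k can be sketched as follows. This is an illustrative Python stand-in for the R class package: a one-dimensional toy feature replaces the real Tf-Idf vectors, and the data are hypothetical.

```python
import random

def five_fold_cv(labeled, classify, k_values, seed=0):
    """Return the accuracy of `classify` for each candidate k, averaged over five folds."""
    data = labeled[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::5] for i in range(5)]
    results = {}
    for k in k_values:
        accs = []
        for i in range(5):
            test = folds[i]
            train = [d for j, fold in enumerate(folds) if j != i for d in fold]
            correct = sum(1 for x, y in test if classify(train, x, k) == y)
            accs.append(correct / len(test))
        results[k] = sum(accs) / 5
    return results

def nearest_label(train, x, k):
    """Toy 1-D k-NN: majority label among the k nearest feature values."""
    nearest = sorted(train, key=lambda t: abs(t[0] - x))[:k]
    rel = sum(1 for _, y in nearest if y == "relevant")
    return "relevant" if rel * 2 > k else "irrelevant"

# Hypothetical 1-D surrogate feature: scores near 1 stand in for relevant abstracts.
labeled = [(0.9, "relevant"), (0.8, "relevant"), (0.95, "relevant"), (0.7, "relevant"),
           (0.85, "relevant"), (0.1, "irrelevant"), (0.2, "irrelevant"),
           (0.05, "irrelevant"), (0.3, "irrelevant"), (0.15, "irrelevant")]
scores = five_fold_cv(labeled, nearest_label, k_values=[1, 3, 5])
```

In the actual study the same procedure was run for k from 1 to 155, and k = 6 maximized the averaged accuracy.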
Abstract category signatures
A signature of the ten terms most discriminating between relevant and irrelevant literature was extracted by t-test of differential term occurrence in relevant and irrelevant abstracts. The average term count was calculated for the ten most discriminating terms, i.e. the terms with the lowest p-values.
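The signature extraction can be sketched as follows. This illustrative Python example ranks terms by the absolute t statistic of per-document counts between the two classes (Welch's unequal-variance form is assumed here); the vocabulary and counts are hypothetical toys.

```python
from statistics import mean, variance

def welch_t(xs, ys):
    """Welch's t statistic for a term's counts in two groups of abstracts."""
    vx, vy = variance(xs) / len(xs), variance(ys) / len(ys)
    return (mean(xs) - mean(ys)) / (vx + vy) ** 0.5

def term_signature(relevant, irrelevant, vocab, top=10):
    """Rank terms by |t| of per-document counts; larger |t| means more discriminating."""
    stats = []
    for j, term in enumerate(vocab):
        xs = [row[j] for row in relevant]
        ys = [row[j] for row in irrelevant]
        stats.append((abs(welch_t(xs, ys)), term))
    return [term for _, term in sorted(stats, reverse=True)[:top]]

vocab = ["epitope", "cell", "mouse"]
relevant = [[3, 1, 0], [2, 1, 0], [3, 2, 1]]      # term counts per relevant abstract
irrelevant = [[0, 1, 2], [1, 1, 3], [0, 2, 2]]    # term counts per irrelevant abstract
signature = term_signature(relevant, irrelevant, vocab, top=2)
```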
Availability of supporting data
The TANTIGEN training corpus is available in raw form at http://cvc.dfci.harvard.edu/tadb/download/
List of abbreviations used
Tf-Idf: Term frequency-inverse document frequency
k-NN: k-Nearest Neighbor algorithm
XML: Extensible Markup Language
LRO and OW acknowledge funding from the Novo Nordisk Foundation; UJK acknowledges funding from the Oticon Foundation.
Publication of this article was funded by the Novo Nordisk Foundation.
This article has been published as part of BMC Genomics Volume 14 Supplement 5, 2013: Twelfth International Conference on Bioinformatics (InCoB2013): Computational biology. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcgenomics/supplements/14/S5.
References
- Fernández-Suárez XM, Galperin MY: The 2013 Nucleic Acids Research Database Issue and the online molecular biology database collection. Nucleic Acids Research. 2013, 41: D1-7. 10.1093/nar/gks1297.
- Magrane M, Consortium U: UniProt Knowledgebase: a hub of integrated protein data. Database: The Journal of Biological Databases and Curation. 2011, 2011: bar009.
- Benson DA, Karsch-Mizrachi I, Clark K, Lipman DJ, Ostell J, Sayers EW: GenBank. Nucleic Acids Research. 2012, 40: D48-53. 10.1093/nar/gkr1202.
- Squires RB, Noronha J, Hunt V, García-Sastre A, Macken C, Baumgarth N, Suarez D, Pickett BE, Zhang Y, Larsen CN, Ramsey A, Zhou L, Zaremba S, Kumar S, Deitrich J, Klem E, Scheuermann RH: Influenza Research Database: an integrated bioinformatics resource for influenza research and surveillance. Influenza and Other Respiratory Viruses. 2012.
- Olsen LR, Zhang GL, Reinherz EL, Brusic V: FLAVIdB: A data mining system for knowledge discovery in flaviviruses with direct applications in immunology and vaccinology. Immunome Research. 2011, 7: 1-9.
- Sebastiani F: Machine learning in automated text categorization. ACM Computing Surveys. 2002, 34: 1-47. 10.1145/505282.505283.
- Schönbach C, Nagashima T, Konagaya A: Textmining in support of knowledge discovery for vaccine development. Methods. 2004, 34: 488-95. 10.1016/j.ymeth.2004.06.009.
- Goetz T, von der Lieth C-W: PubFinder: a tool for improving retrieval rate of relevant PubMed abstracts. Nucleic Acids Research. 2005, 33: W774-8. 10.1093/nar/gki429.
- Miotto O, Tan TW, Brusic V: Supporting the curation of biological databases with reusable text mining. Genome Informatics. 2005, 16: 32-44.
- Zhao J, Miles A, Klyne G, Shotton D: Linked data and provenance in biological data webs. Briefings in Bioinformatics. 2009, 10: 139-52. 10.1093/bib/bbn044.
- Mierswa I, Wurst M, Klinkenberg R, Scholz M: YALE: Rapid prototyping for complex data mining tasks. Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '06). 2006, 935-940.
- Brusic V, August JT, Petrovsky N: Information technologies for vaccine research. Expert Review of Vaccines. 2005, 4: 407-17.
- Wulf M, Hoehn P, Trinder P: Identification of human MHC class I binding peptides using the iTOPIA-epitope discovery system. Methods in Molecular Biology. 2009, 524: 361-7. 10.1007/978-1-59745-450-6_26.
- Andersen RS, Kvistborg P, Frøsig TM, Pedersen NW, Lyngaa R, Bakker AH, Shu CJ, Straten PT, Schumacher TN, Hadrup SR: Parallel detection of antigen-specific T cell responses by combinatorial encoding of MHC multimers. Nature Protocols. 2012, 7: 891-902. 10.1038/nprot.2012.037.
- Schuler MM, Nastke M-D, Stevanović S: SYFPEITHI: database for searching and T-cell epitope prediction. Methods in Molecular Biology. 2007, 409: 75-93. 10.1007/978-1-60327-118-9_5.
- Vita R, Zarebski L, Greenbaum JA, Emami H, Hoof I, Salimi N, Damle R, Sette A, Peters B: The Immune Epitope Database 2.0. Nucleic Acids Research. 2010, 38: D854-62. 10.1093/nar/gkp1004.
- Forbes SA, Bhamra G, Bamford S, Dawson E, Kok C, Clements J, Menzies A, Teague JW, Futreal PA, Stratton MR: The Catalogue of Somatic Mutations in Cancer (COSMIC). Current Protocols in Human Genetics. 2008, Chapter 10: Unit 10.11.
- Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ: Basic local alignment search tool. Journal of Molecular Biology. 1990, 215: 403-10. 10.1016/S0022-2836(05)80360-2.
- Katoh K, Toh H: Recent developments in the MAFFT multiple sequence alignment program. Briefings in Bioinformatics. 2008, 9: 286-98. 10.1093/bib/bbn013.
- Nielsen M, Lundegaard C, Lund O: Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method. BMC Bioinformatics. 2007, 8: 238. 10.1186/1471-2105-8-238.
- Lundegaard C, Lamberth K, Harndahl M, Buus S, Lund O, Nielsen M: NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11. Nucleic Acids Research. 2008, 36: W509-12. 10.1093/nar/gkn202.
- Van den Eynde BJ, Van der Bruggen P: T cell defined tumor antigens. Current Opinion in Immunology. 1997, 9: 684-93. 10.1016/S0952-7915(97)80050-7.
- Renkvist N, Castelli C, Robbins PF, Parmiani G: A listing of human tumor antigens recognized by T cells. Cancer Immunology, Immunotherapy. 2001, 50: 3-15. 10.1007/s002620000169.
- Novellino L, Castelli C, Parmiani G: A listing of human tumor antigens recognized by T cells: March 2004 update. Cancer Immunology, Immunotherapy. 2005, 54: 187-207. 10.1007/s00262-004-0560-6.
- Lu Z: PubMed and beyond: a survey of web tools for searching biomedical literature. Database: The Journal of Biological Databases and Curation. 2011, 2011: baq036.
- Fellbaum C: WordNet(s). Encyclopedia of Language & Linguistics. Second edition. Edited by Brown K. Amsterdam: Elsevier. 2006, 13: 665-670.
- Feinerer I: Introduction to the tm Package: Text Mining in R. R vignette. 2011, 1-8.
- Feinerer I, Hornik K, Meyer D: Text mining infrastructure in R. Journal of Statistical Software. 2008, 25.
- Jones KS: A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation. 1972, 28: 11-21. 10.1108/eb026526.
- Cover TM, Hart PE: Nearest neighbor pattern classification. IEEE Transactions on Information Theory. 1967, 13: 21-27.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.