Ranking the whole MEDLINE database according to a large training set using text indexing
BACKGROUND: The MEDLINE database contains over 12 million references to scientific literature, with about 3/4 of recent articles including an abstract of the publication. Retrieval of entries using queries with keywords is useful for human users who need to obtain small selections. However, particular analyses of the literature or database developments may need the complete ranking of all the references in the MEDLINE database as to their relevance to a topic of interest.
| Main Authors: | Suomela, Brian P; Andrade, Miguel A |
|---|---|
| Format: | Text |
| Language: | English |
| Published: | BioMed Central, 2005 |
| Subjects: | Methodology Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1274266/ https://www.ncbi.nlm.nih.gov/pubmed/15790421 http://dx.doi.org/10.1186/1471-2105-6-75 |
_version_ | 1782125976874909696 |
---|---|
author | Suomela, Brian P Andrade, Miguel A |
author_facet | Suomela, Brian P Andrade, Miguel A |
author_sort | Suomela, Brian P |
collection | PubMed |
description | BACKGROUND: The MEDLINE database contains over 12 million references to scientific literature, with about 3/4 of recent articles including an abstract of the publication. Retrieval of entries using queries with keywords is useful for human users who need to obtain small selections. However, particular analyses of the literature or database developments may need the complete ranking of all the references in the MEDLINE database as to their relevance to a topic of interest. This report describes a method that does this ranking using the differences in word content between MEDLINE entries related to a topic and the whole of MEDLINE, in a computational time appropriate for an article search query engine. RESULTS: We tested the capabilities of our system to retrieve MEDLINE references which are relevant to the subject of stem cells. We took advantage of the existing annotation of references with terms from the MeSH hierarchical vocabulary (Medical Subject Headings, developed at the National Library of Medicine). A training set of 81,416 references was constructed by selecting entries annotated with the MeSH term stem cells or some child in its subtree. Frequencies of all nouns, verbs, and adjectives in the training set were computed, and the ratios of word frequencies in the training set to those in the entire MEDLINE were used to score references. Self-consistency of the algorithm, benchmarked with a test set containing the training set and an equal number of references randomly selected from MEDLINE, was better using nouns (79%) than adjectives (73%) or verbs (70%). The evaluation of the system with 6,923 references not used for training, containing 204 articles relevant to stem cells according to a human expert, indicated a recall of 65% for a precision of 65%. CONCLUSION: This strategy appears to be useful for predicting the relevance of MEDLINE references to a given concept. The method is simple and can be used with any user-defined training set. The choice of the part of speech of the words used for classification has important effects on performance. Lists of words, scripts, and additional information are available from the web address . |
format | Text |
id | pubmed-1274266 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2005 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-12742662005-10-29 Ranking the whole MEDLINE database according to a large training set using text indexing Suomela, Brian P Andrade, Miguel A BMC Bioinformatics Methodology Article BACKGROUND: The MEDLINE database contains over 12 million references to scientific literature, with about 3/4 of recent articles including an abstract of the publication. Retrieval of entries using queries with keywords is useful for human users who need to obtain small selections. However, particular analyses of the literature or database developments may need the complete ranking of all the references in the MEDLINE database as to their relevance to a topic of interest. This report describes a method that does this ranking using the differences in word content between MEDLINE entries related to a topic and the whole of MEDLINE, in a computational time appropriate for an article search query engine. RESULTS: We tested the capabilities of our system to retrieve MEDLINE references which are relevant to the subject of stem cells. We took advantage of the existing annotation of references with terms from the MeSH hierarchical vocabulary (Medical Subject Headings, developed at the National Library of Medicine). A training set of 81,416 references was constructed by selecting entries annotated with the MeSH term stem cells or some child in its subtree. Frequencies of all nouns, verbs, and adjectives in the training set were computed, and the ratios of word frequencies in the training set to those in the entire MEDLINE were used to score references. Self-consistency of the algorithm, benchmarked with a test set containing the training set and an equal number of references randomly selected from MEDLINE, was better using nouns (79%) than adjectives (73%) or verbs (70%). The evaluation of the system with 6,923 references not used for training, containing 204 articles relevant to stem cells according to a human expert, indicated a recall of 65% for a precision of 65%. CONCLUSION: This strategy appears to be useful for predicting the relevance of MEDLINE references to a given concept. The method is simple and can be used with any user-defined training set. The choice of the part of speech of the words used for classification has important effects on performance. Lists of words, scripts, and additional information are available from the web address . BioMed Central 2005-03-24 /pmc/articles/PMC1274266/ /pubmed/15790421 http://dx.doi.org/10.1186/1471-2105-6-75 Text en Copyright © 2005 Suomela and Andrade; licensee BioMed Central Ltd. |
spellingShingle | Methodology Article Suomela, Brian P Andrade, Miguel A Ranking the whole MEDLINE database according to a large training set using text indexing |
title | Ranking the whole MEDLINE database according to a large training set using text indexing |
title_full | Ranking the whole MEDLINE database according to a large training set using text indexing |
title_fullStr | Ranking the whole MEDLINE database according to a large training set using text indexing |
title_full_unstemmed | Ranking the whole MEDLINE database according to a large training set using text indexing |
title_short | Ranking the whole MEDLINE database according to a large training set using text indexing |
title_sort | ranking the whole medline database according to a large training set using text indexing |
topic | Methodology Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1274266/ https://www.ncbi.nlm.nih.gov/pubmed/15790421 http://dx.doi.org/10.1186/1471-2105-6-75 |
work_keys_str_mv | AT suomelabrianp rankingthewholemedlinedatabaseaccordingtoalargetrainingsetusingtextindexing AT andrademiguela rankingthewholemedlinedatabaseaccordingtoalargetrainingsetusingtextindexing |
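The scoring scheme described in the abstract (ratios of word frequencies in a topic training set to frequencies in the whole of MEDLINE, accumulated to score each reference) can be sketched roughly as follows. This is an illustrative reconstruction under simplifying assumptions, not the authors' code: the paper restricts scoring to nouns, verbs, and adjectives via part-of-speech tagging, which is omitted here, and the function names and toy corpora are hypothetical.

```python
from collections import Counter

def build_scores(training_abstracts, background_abstracts):
    """Score each word by the ratio of its relative frequency in the
    topic training set to its relative frequency in the background
    corpus (standing in for the whole of MEDLINE)."""
    train = Counter(w for a in training_abstracts for w in a.lower().split())
    bg = Counter(w for a in background_abstracts for w in a.lower().split())
    n_train, n_bg = sum(train.values()), sum(bg.values())
    # Words absent from the background corpus are skipped to avoid
    # division by zero; a real system would smooth these counts.
    return {w: (c / n_train) / (bg[w] / n_bg)
            for w, c in train.items() if bg[w] > 0}

def score_reference(abstract, scores):
    """Rank a reference by the mean topic score of its words; words
    never seen in the training set contribute nothing."""
    words = abstract.lower().split()
    return sum(scores.get(w, 0.0) for w in words) / max(len(words), 1)
```

With a training set of stem-cell abstracts, topic words such as "stem" receive a score above 1 (over-represented relative to the background), so references rich in such words rank higher; ranking all of MEDLINE is then a single pass over the precomputed word-score table.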