
Multimodal Medical Image Retrieval: OHSU at ImageCLEF 2008

Jayashree Kalpathy-Cramer, Steven Bedrick, William Hatt, William Hersh
Evaluating Systems for Multilingual and Multimodal Information Access, Jan 2009

Abstract

We present results from Oregon Health & Science University's participation in the medical retrieval task of ImageCLEF 2008. Our web-based retrieval system was built using the Ruby on Rails framework. Ferret, a Ruby port of Lucene, was used to create the full-text index and search engine. In addition to the textual index of annotations, supervised machine learning techniques using visual features were used to classify the images by image acquisition modality. Our system provides the user with a number of search options, including the ability to limit a search by modality, UMLS-based query expansion, and natural language processing-based techniques. We submitted purely textual runs as well as mixed runs that used the purported modality, along with interactive runs using user-specified search options. Although the use of the UMLS Metathesaurus increased our recall, our system is geared towards early precision. Consequently, many of our automatic multimodal runs using the custom parser, as well as our interactive runs, had high early precision, including the highest P10 and P30 among the official runs. Our runs also performed well on the bpref metric, a measure that is more robust to incomplete relevance judgments.
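To make the text-retrieval component concrete, below is a minimal sketch of how a Ferret full-text index over image annotations might be built and queried, including a modality restriction of the kind described above. The field names (caption, modality) and the example query are illustrative assumptions, not the actual fields or queries used in the OHSU system.

    require 'rubygems'
    require 'ferret'

    # In-memory full-text index; free-text queries default to the caption field.
    # (Field names here are hypothetical, not those of the OHSU system.)
    index = Ferret::Index::Index.new(:default_field => 'caption')

    # Index a couple of example image annotation records.
    index << { :id => 'img001', :caption => 'Chest x-ray showing a pneumothorax', :modality => 'XR' }
    index << { :id => 'img002', :caption => 'Axial CT of the abdomen',            :modality => 'CT' }

    # Free-text query restricted to a purported acquisition modality,
    # mirroring the "limit search by modality" option.
    query = 'pneumothorax AND modality:XR'
    index.search_each(query, :limit => 10) do |doc_id, score|
      puts "#{index[doc_id][:id]}  (score #{score})"
    end

In the actual system, the same kind of modality restriction could be applied automatically using the image's predicted modality, which is the basis of the mixed (text plus visual) runs described in the abstract.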
