Assessing the quality of imaging interpretations requires comparing radiological interpretations with subsequent surgical-pathology results, when available. This manual process is inherently slow, tedious, and expensive, and unless the errors in the interpretations are systematic, discrepancies are unlikely to be detected. Classical computational approaches based on Natural Language Processing require a corpus of annotated documents for model development and evaluation. However, developing such a corpus is also expensive and time-consuming, and the output is usually not portable across medical institutions. To alleviate these issues, we propose a statistical learning method for textual feature extraction that uses a document-level agreement score as the response variable. The initial model's predictor variables consisted of binary representations of keywords extracted from the radiology report and mapped to ontological semantic classes (anatomical entity, imaging observation, imaging observation size, imaging observation margin, pathophysiological condition). The current approach still requires some manual feature curation, especially of the interaction terms. We propose to enhance the model by including as additional predictors OpenIE relational tuples, typically binary relations, extracted from the raw text and normalized by mapping them to controlled vocabularies.
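To make the feature construction concrete, the following is a minimal sketch, not the authors' implementation: it encodes a report as binary indicators of ontology-mapped keywords and fits a simple classifier against a binarized agreement label. The keyword-to-class mapping, the toy reports, and the choice of logistic regression are all illustrative assumptions; the abstract does not specify the learning algorithm or vocabulary.

```python
# Hedged sketch: binary keyword features mapped to semantic classes,
# regressed against a document-level radiology-pathology agreement label.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical keyword -> semantic class mapping (anatomical entity, imaging
# observation, size, margin, pathophysiological condition). Illustrative only.
ONTOLOGY = {
    "breast": "anatomical_entity",
    "mass": "imaging_observation",
    "2 cm": "imaging_observation_size",
    "spiculated": "imaging_observation_margin",
    "malignancy": "pathophysiological_condition",
}
FEATURES = sorted(ONTOLOGY)  # fixed feature order for the design matrix

def encode(report_text: str) -> np.ndarray:
    """Binary indicator vector: 1.0 if a mapped keyword appears in the report."""
    text = report_text.lower()
    return np.array([1.0 if kw in text else 0.0 for kw in FEATURES])

# Toy data: reports paired with an agreement label (1 = radiology interpretation
# agreed with surgical pathology, 0 = discrepant). Purely fabricated examples.
reports = [
    "Spiculated mass in the left breast, suspicious for malignancy.",
    "No suspicious mass; benign-appearing findings.",
]
agreement = [1, 0]

X = np.vstack([encode(r) for r in reports])
model = LogisticRegression().fit(X, agreement)
print(model.predict(encode("Breast mass with spiculated margin.").reshape(1, -1)))
```

In the same spirit, the proposed OpenIE enhancement would append indicators for normalized relational tuples (e.g., a subject-relation-object triple mapped to controlled-vocabulary concepts) to the same design matrix.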
Please contact Robert Sanders (sandersrl@missouri.edu) for Zoom information.