Researchers design biomedical-image databases with physician input and eye-tracking data to assist in teaching and diagnostics

The medical field is awash in digital images. Doctors rely on X-rays, CT scans, MRIs and standard digital images to make diagnoses and to teach students. But cluttered databases pose a challenge both to the medical professionals who use these resources and to the information technologists who create the systems.

“One of the problems of having all these images is how do we retrieve them?” says Anne Haake, professor of information sciences and technologies in the B. Thomas Golisano College of Computing and Information Sciences. “There is a real need to make those images useful. It’s all part of the push to use computers to improve the way medicine is practiced.”

While on sabbatical at the National Institutes of Health’s National Library of Medicine, Haake saw the need for image databases built on input from the intended end-users and designed from the beginning with flexible interfaces. To address this, Haake is applying funding she won from the National Science Foundation and the NIH to develop a prototype database using input from dermatologists on images of skin conditions. The NSF is funding visual perception research using eye tracking and the design of a content-based image retrieval system accessible through touch, gaze, voice and gesture; the NIH portion will be used to fuse image understanding and medical knowledge.

Focusing on dermatology draws on Haake’s previous experiences as a developmental biologist at the University of Rochester, where she conducted skin research before pursuing computing and biomedical informatics.

Joining Haake is Cara Calvelli, M.D., a former UR colleague and a professor in RIT’s physician assistant program. Calvelli has recruited dermatologists, residents and physician assistant students for the project. She is also helping to properly describe the sample images, some of which come from her own collection.

Haake’s team includes Jeff Pelz, co-director of the Multidisciplinary Vision Research Laboratory in the Chester F. Carlson Center for Imaging Science, who is leading the eye-tracking effort; and Pengcheng Shi, director for Graduate Studies and Research in the Golisano College, who is providing expertise in image understanding. Graduate students Sai Mulpura, Preethi Vaidyanathan and Rui Li are also instrumental to the project.

According to Haake, bridging the “semantic gap” is the central challenge facing researchers working in content-based image retrieval. Search functions can go awry when computer-engineered algorithms trip on nuances and fail to distinguish between disparate objects, such as a whale and a ship. Building a system based on the end user’s knowledge can prevent such semantic hiccups.
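The whale-and-ship failure mode comes down to low-level features looking alike even when the subjects differ. A minimal sketch (illustrative only, not the project's code, with made-up pixel samples) shows how two scenes dominated by blue water can produce nearly identical color histograms:

```python
# Illustrative sketch of the "semantic gap": two semantically different
# scenes -- a whale and a ship, both against blue water -- can have
# nearly identical low-level color statistics.

def color_histogram(pixels, bins=4):
    """Coarse per-channel histogram of (r, g, b) tuples in [0, 255]."""
    hist = [0] * (bins * 3)
    for r, g, b in pixels:
        hist[r * bins // 256] += 1
        hist[bins + g * bins // 256] += 1
        hist[2 * bins + b * bins // 256] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Hypothetical pixel samples: mostly blue water plus a gray object.
whale_scene = [(60, 90, 180)] * 90 + [(80, 80, 90)] * 10
ship_scene  = [(55, 95, 175)] * 88 + [(85, 85, 95)] * 12

d = histogram_distance(color_histogram(whale_scene), color_histogram(ship_scene))
print(f"histogram distance: {d:.3f}")  # near zero: the features alone can't tell them apart
```

Encoding what an expert actually attends to, rather than relying on pixel statistics alone, is one way to close that gap.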

The project will explore eye tracking as a way to identify what an expert finds perceptually important. Watching where a physician looks when making a diagnosis can reveal the key regions in an image. Sixteen pairs of dermatologists and PA students viewed skin conditions in 50 different images displayed on a monitor. A device attached to the monitor recorded the physicians’ eye movements as they lingered on the critical regions of each image. At the same time, vocabulary mined from audio recordings of the physicians’ explanations to the students will form the common search words in the database.
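One simple way to turn raw fixation data into "key regions" is to bin fixations into a coarse grid and rank cells by total dwell time. The sketch below is an assumed simplification, not the team's actual pipeline, and the fixation values are hypothetical:

```python
# A minimal sketch of flagging perceptually important image regions:
# bin (x, y, duration) fixations into a coarse grid and rank cells
# by accumulated dwell time.

from collections import defaultdict

def dwell_map(fixations, cell=100):
    """fixations: (x, y, duration_ms) tuples; returns dwell time per grid cell."""
    dwell = defaultdict(float)
    for x, y, ms in fixations:
        dwell[(x // cell, y // cell)] += ms
    return dict(dwell)

def key_regions(fixations, cell=100, top=3):
    """Grid cells ranked by total dwell time, longest first."""
    dwell = dwell_map(fixations, cell)
    return sorted(dwell, key=dwell.get, reverse=True)[:top]

# Hypothetical recording: the viewer lingers around pixel (350, 220).
fixations = [(340, 210, 400), (360, 230, 650), (355, 215, 500),
             (120, 480, 180), (700, 90, 150)]
print(key_regions(fixations))  # the cell at (3, 2) dominates
```

In practice the regions would then be linked to the vocabulary mined from the audio recordings, tying where the physician looked to how the physician described it.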

Identifying the relevant features in the images provided by Calvelli and Logical Images Inc., a Rochester-based company, will help Haake’s team improve the accuracy and efficiency of retrieving images from the database. Algorithms based on the eye-tracking data will compare similarities and differences in subject matter, color, contrast, size and shape—what the dermatologists focused on during the eye-tracking observations.
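One plausible shape for that retrieval step (an assumption for illustration, not the team's published method, with invented image names and weights) is to represent each image by a feature vector and weight each dimension by how much attention the dermatologists gave it:

```python
# Sketch of attention-weighted retrieval: rank database images by a
# weighted distance over (color, contrast, size, shape) features, with
# weights reflecting which properties drew the experts' fixations.

import math

def weighted_distance(a, b, weights):
    """Euclidean distance with a per-dimension weight."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def retrieve(query, database, weights, k=2):
    """Return the k image names whose features lie closest to the query."""
    ranked = sorted(database, key=lambda item: weighted_distance(query, item[1], weights))
    return [name for name, _ in ranked[:k]]

# Hypothetical normalized features: (color, contrast, size, shape).
database = [("psoriasis_01", (0.8, 0.6, 0.4, 0.7)),
            ("eczema_02",    (0.7, 0.5, 0.5, 0.5)),
            ("melanoma_03",  (0.2, 0.9, 0.1, 0.3))]
attention_weights = (0.4, 0.3, 0.1, 0.2)  # e.g. color drew the most fixations

print(retrieve((0.75, 0.55, 0.45, 0.65), database, attention_weights))
```

The design choice the article describes is exactly this kind of weighting: the eye-tracking observations decide which low-level properties count most in the comparison.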

“This is very specialized for dermatology, but the one thing we want to establish is that this may be a better paradigm for developing systems in terms of involving the end-user in the development of these systems and some of the methodologies,” Haake says.

“The best way to learn is to see patients again and again with various disorders,” says Cara Calvelli, M.D., dermatologist and professor in the physician assistant program. “When you can’t get the patients themselves, getting good pictures and learning how to describe them is second best.”

“All the research organizations these days are looking for interdisciplinary collaboration,” says Anne Haake, professor of information sciences and technologies. “And the real reason is because there’s too much expertise in all these different areas that needs to come together. No one person can do it. It’s not just that it’s fashionable; it’s really needed to have in-depth expertise in all these areas.”

“For many years computing/technical people have said we can write algorithms such that it will work,” says Pengcheng Shi, director for Graduate Studies and Research in Golisano College of Computing and Information Sciences. “But people start to realize that machines are not all that powerful. At the end of the day we need to put the human back into it. What are the physicians looking at and how are they looking at it in order to make their decisions?”

“People move their eyes 150,000 times a day, but you don’t spend time thinking about where you will move your eyes next and you don’t waste any memory remembering where your eyes have been,” says Jeff Pelz, co-director of the Multidisciplinary Vision Research Laboratory in the Chester F. Carlson Center for Imaging Science. “You just move your eyes to the next place you need information and a fraction of a second later you move them again.”