Data science and machine learning are becoming ubiquitous in our daily lives.
Healthcare is no exception, and medical imaging is among the areas where these algorithms are poised to make a major impact.
Technologies that allow us to look inside the human body have made extraordinary gains since the first X-rays were used to identify broken bones more than a century ago. Now, hospitals use sophisticated MRI and CT machines to look for cancers and other abnormalities. But one thing hasn’t changed much over the decades: it still takes a highly trained medical professional to examine and interpret those images, a process that demands years of training and considerable time per scan.
Researchers and doctors, however, are hoping machine learning will serve as a complementary set of “eyes,” able to examine images faster and differently, increasing efficiency, reducing errors, and potentially identifying features and emerging problems that humans alone can’t discern.
That’s why faculty across the University of Wisconsin–Madison, including researchers in the College of Engineering, the School of Medicine and Public Health, and the Department of Computer Sciences, are collaborating on an initiative called Machine Learning for Medical Imaging. The goal is to connect physicians and data scientists to build systems that can improve and automate tricky diagnoses.