Microsoft’s Strategy for Finding What’s Next in Healthcare AI
Microsoft is bringing to healthcare the same model it used to launch its quantum computing and chatbot efforts, signaling the company’s ambition and optimism about the transformative potential of artificial intelligence in this enormous industry.
Peter Lee is the Microsoft Research executive in charge of the NExT initiative—an effort quietly begun in 2014 by CEO Satya Nadella and Harry Shum, head of the company’s A.I. division, to convert select projects from the company’s global research apparatus into business opportunities.
Lee says he and his team work like venture capital investors inside Microsoft (NASDAQ: MSFT), evaluating what amount to startup pitches from researchers and other employees of the company. The people behind the best ideas are given internal resources—engineering talent and capital—and a structure “to help them grow with the ultimate goal of compelling Microsoft to make some big new bets,” he says. “The culture is that you are running a startup here and your intention is to grow big and take this all the way.”
Graduates of the NExT initiative include Microsoft’s push into quantum computing, its Bot Framework, work on reconfigurable computer chips, and systems to enable real-time A.I. applications.
About 10 months ago, Nadella tasked Lee with applying this model to help organize and accelerate the company’s work in healthcare. Lee, who will share more details at Xconomy’s sold-out Healthcare + A.I. Northwest conference in Seattle on Thursday, often likens the task—and the enormity of the healthcare industry—to “being thrown into the middle of the Pacific Ocean and asked to find land.”
Microsoft already has a huge healthcare business, and a long history of innovation efforts, with varying degrees of success. Well over 100,000 healthcare enterprises are paying customers, Lee says. “The entire universe of electronic health records systems runs almost exclusively on Microsoft SQL Server,” he says. “So, the whole digitization of health records is fundamentally running on Microsoft products.”
The Healthcare NExT effort, as the name suggests, is focused on the future, “and a bet that the cloud and A.I. services in the cloud are going to be really meaningful for improving healthcare—making it more efficient, providing better access for people,” Lee says.
Here are excerpts from an interview with Lee last week, in which he discusses a new partnership to design “digital hospitals of the future”; HIPAA-compliant healthcare chatbots; near-term A.I. applications in radiology; the healthcare data challenge; and how his own view of technology has evolved from unbridled optimism to greater caution.
Focus areas: Lee says Healthcare NExT groups its projects into three large areas: near-term applications of A.I. to clinical practice; life sciences applications in areas such as genomics, which lend themselves to cloud-first and A.I.-first approaches; and healthcare data.
Raw material: More and more researchers—both at Microsoft and through its extensive network of academic and industry collaborations—are “motivated by problems in healthcare,” Lee says.
“Digital hospitals of the future”: The growing health system attached to the University of Pittsburgh, UPMC, announced last week a $2 billion investment to build three new specialty care hospitals focused on transplants, cancer, and vision and rehabilitation, with Microsoft collaborating on technology design. Microsoft Healthcare NExT was already working with UPMC. Lee, a former Carnegie Mellon University computer science professor and department head, says UPMC makes for an interesting partner because it is doing its own research and development in natural language processing, machine learning, and other A.I. technologies—an effort led by one of Lee’s former students at CMU.
One goal of the partnership is to use A.I. technologies to relieve some of the burden on overworked healthcare providers, who, by some estimates, spend 100 minutes a day entering data into forms, Lee says. He won’t reveal specifics, but says there’s great potential to use A.I. and cloud technologies “to enable more personal and more relaxed interactions between clinicians and their patients.”
Beyond that, it’s still early in the process of designing these modern UPMC hospitals. “It’s a very unique opportunity,” Lee says. “It’s scary, too. It’s a little daunting to confront a clean sheet of paper, but there is a lot of optimism right now just on the basis of the early collaborations we’ve been working on.”
InnerEye: When radiologists make a therapy plan for a cancer patient, they must manually click through images to identify and measure different anatomy. Lee says the experimental Microsoft product InnerEye combines state-of-the-art machine learning and computer vision technologies to automate that process. The system learns by observing the habits of radiologists and codifying them. “When you observe what good radiologists do, there’s some artistry, but in the grand scheme of the workflow, the artistry comes in a very small amount of time, and it’s then buried in hours of incredibly mundane just tapping on pixels,” Lee says. “And so the opportunity here is, through machine learning, to understand the workflow, stay out of the way of the creative part of it, and then automate away the mundane parts.”
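The workflow Lee describes can be made concrete with a minimal sketch (illustrative only, not InnerEye code; the synthetic volume, the stand-in “model,” the 0.5 threshold, and the voxel spacing are all assumptions): given a per-voxel probability map from a trained segmentation network, software can clean it up and turn it into the contour and volume measurements a therapy plan needs, instead of a clinician tapping through every slice by hand.

```python
# Illustrative sketch only: turning a segmentation model's per-voxel output
# into the kind of measurements a radiotherapy plan needs. The shapes,
# spacing, and the fake "model" are assumptions, not InnerEye internals.
import numpy as np
from scipy import ndimage

# Pretend CT volume: 64 slices of 128x128 voxels, 1 x 1 x 3 mm voxel spacing.
rng = np.random.default_rng(0)
ct_volume = rng.normal(size=(64, 128, 128))
voxel_volume_mm3 = 1.0 * 1.0 * 3.0

def fake_model(volume: np.ndarray) -> np.ndarray:
    """Stand-in for a trained segmentation network: returns a per-voxel
    probability that the voxel belongs to the target organ."""
    probs = np.zeros_like(volume)
    probs[20:40, 40:80, 40:80] = 0.9   # a blob where the "organ" sits
    return probs

probs = fake_model(ct_volume)
mask = probs > 0.5                      # binarize the probability map

# Keep only the largest connected component to drop stray voxels.
labels, n = ndimage.label(mask)
if n > 0:
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    mask = labels == (int(np.argmax(sizes)) + 1)

organ_volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0
print(f"Estimated organ volume: {organ_volume_ml:.1f} mL")
# Per-slice contours for a treatment plan could then be extracted from
# `mask` by a contour-tracing routine, rather than drawn by hand.
```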
Researchers are working closely with clinicians on the system, which is not currently in clinical trials, and Microsoft plans to present it during a plenary session at RSNA, the top medical conference for radiology, later this month.
Why radiology is ripe for AI: “We’ve just had this amazing set of advances in computer vision,” Lee says. Also, computer scientists are starting to understand the radiologist’s workflow, which is creating a level of comfort and acceptance within the medical specialty. “They know that their own creative input isn’t being overridden by machine intelligence. Machine intelligence is augmenting what they do,” Lee says. “So there’s a popular misconception that A.I. will replace radiologists. I don’t think that’s coming. It’s certainly not true today.”
Chatbots: Microsoft’s Bot Framework, including its suite of machine learning models covering speech and language processing, entity extraction, and other cognitive services, is finding applications in healthcare. But in order to build voice-enabled or conversational services that would deal directly with protected patient health information, the underlying technology must be compliant with the Health Insurance Portability and Accountability Act (HIPAA).
Lee says Microsoft’s health bot is HIPAA compliant, and partners are building applications such as a self-service medical triage bot. It engages in a conversation to help ascertain symptoms and then connects the patient to the right human clinician, who is provided with a concise triage report, Lee says. Other applications include answering health insurance benefits questions and connecting patients to clinical trials that may be appropriate for them.
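As a rough illustration of that triage pattern (a hypothetical sketch, not Microsoft’s health bot or the Bot Framework API; the questions, the routing rule, and the report format are invented), a symptom-intake flow boils down to asking structured questions, applying routing logic, and emitting a concise summary for the clinician who takes over:

```python
# Minimal sketch of a triage-style intake flow. This is not Microsoft's
# health bot or the Bot Framework API; the questions, the routing rule,
# and the report format are invented purely to illustrate the pattern.

QUESTIONS = [
    ("chest_pain", "Are you experiencing chest pain? (yes/no) "),
    ("fever", "Do you have a fever above 38C / 100.4F? (yes/no) "),
    ("duration_days", "How many days have you felt unwell? "),
]

def run_triage() -> str:
    answers = {}
    for key, prompt in QUESTIONS:
        answers[key] = input(prompt).strip().lower()

    # Toy routing rule: chest pain escalates; everything else goes to a
    # primary care queue. A real service would use far richer logic.
    urgent = answers.get("chest_pain") == "yes"
    route = "urgent care clinician" if urgent else "primary care clinician"

    lines = [f"- {key}: {value}" for key, value in answers.items()]
    return f"Triage report (route to {route}):\n" + "\n".join(lines)

if __name__ == "__main__":
    print(run_triage())
```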
Data sharing: The least exciting but perhaps most important part of the Healthcare NExT portfolio involves healthcare data. Lee is convinced that A.I. technologies are ready to make real, positive impacts in healthcare. He cites a project in India that applied machine learning models to a huge trove of image data from eye exams. Those models can now aid in early detection and tracking of myopia, a condition that can lead to blindness but can be treated with early intervention. It’s still early, but this effort could prevent blindness “in potentially millions of children,” Lee says.
While technology and healthcare companies have been eager to strike data deals, strategic, regulatory, and technological issues limit the amount of usable data available to researchers developing new healthcare A.I. applications, Lee says. Some amount of privacy regulation is “absolutely necessary,” but within those confines, he sees the opportunity to share more.
“I’d love to expose all the health data to all of our researchers here at Microsoft,” he says. “I’d love to expose that data to all of the innovative startups on Azure right now, but to get to that point has been very hard. You’re going through a funnel, and by the time you get down to the bottom, you have a very small number of those initial data deals that actually turn into data that can fuel an innovation ecosystem.”
Lee says policy and regulatory work surrounding existing data is progressing, but it isn’t enough. New forms of healthcare data—such as the streams produced by connected medical devices—are proliferating, often without the structures and policies in place to enable sharing and use for A.I.- or cloud-based applications, Lee says. Even organizations that want to share data for research purposes often don’t know how to do it, he adds.
Microsoft is developing what Lee describes as a “standard operating procedure” for healthcare data sharing. “What I think is missing is guidance for individual organizations and for consortia of organizations that see a value in migrating and sharing and standardizing data,” Lee says. “So, what are the step by step instructions for doing this that will keep you out of trouble from a compliance standpoint, and put you in direct contact with a larger community of innovators?”
“Double-edged”: In computer science today, “there’s much more thought given to the fundamental fact that all technologies are double-edged. I think when I started in computing, I had, personally, the ethos that technology is good, more technology is better. And I think that ethos fueled all of the optimism in the tech industry for a long time. I think it was never true because there are always bad uses of technology, there are always unintended consequences that could be positive or negative, from any technology. It takes real thoughtfulness and intentionality to stay on top of that and maximize the good, minimize the bad.”
That’s especially true in healthcare applications, he says.
Moreover, computer scientists applying A.I. to healthcare must respect the traditions of medicine. “Medical science has been very disciplined about having an experimental side of things that is statistically correct, and not fooled by correlations,” Lee says. “It really tries to understand causation. And this is where the tradition of randomized trials and having clarity about controls comes into play. When we’re talking about applying today’s machine learning in the medical space, by and large, we’re doing various forms of correlation detection, various kinds of classification, ranking, and other kinds of pattern recognition. So, there’s always a danger of getting fooled by a correlation that may not actually lead to a valid causal influence.”
It can be tempting, Lee says, “to draw firm causal conclusions from the patterns that we’ll be able to pluck out of very large amounts of data.” A.I. researchers must remain disciplined, he says.
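A toy simulation makes the trap concrete (the numbers and the zero-effect “drug” are assumptions chosen for illustration, not real clinical data): when sicker patients are more likely to receive a treatment, a naive comparison of treated and untreated outcomes shows a large difference even though the treatment does nothing, while randomized assignment removes the illusion.

```python
# Toy illustration (assumed numbers, no real clinical data) of how a
# confounder can make an observational correlation look like a treatment
# effect, which is exactly the trap Lee describes.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# A "severity" confounder drives both who gets the drug and the outcome.
severity = rng.normal(size=n)
got_drug = (severity + rng.normal(scale=0.5, size=n)) > 0   # sicker patients get treated
outcome = severity + rng.normal(scale=0.5, size=n)          # sicker patients do worse
# The drug itself has zero causal effect in this simulation.

print("Observational difference:",
      outcome[got_drug].mean() - outcome[~got_drug].mean())      # large, but spurious

# Randomized assignment breaks the link with severity.
randomized = rng.random(n) < 0.5
print("Randomized difference:",
      outcome[randomized].mean() - outcome[~randomized].mean())  # near zero
```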
And they must be aware of potential biases in data used to train learning systems. Examples of this are proliferating in other applications, and researchers should proceed with caution and humility, Lee says.
“This is all uncharted territory for us,” he says. “We have to go there because the positive potential for human health is so enormous, but at least speaking for Microsoft, we are under no illusion that we understand all of the issues that we’ll need to confront and solve there.”