It is not very often that a technology comes along with such potential to radically improve our lives while at the same time forcing unprecedented and unwanted changes on our world.
The most pressing concern, in my opinion, is not AI exceeding our intelligence and all that may come with that. It is what will happen first, on the journey to such broad deployment of AI.
Before we ask AI to run the planet, we’re likely to ask it to augment or replace the jobs of our doctors, lawyers, journalists, receptionists and financial advisors. These advances are just around the corner, with a host of well-funded companies fighting it out to provide solutions for radiology or to support lawyers, for example.
Here at Skin Analytics we’re putting the finishing touches to our clinical study, running in six NHS hospital trusts, to prove over more than 1,000 patients that our AI-based system can identify melanoma as accurately as a dermatologist.
If we assume for the moment that all these solutions work (I’ll come back to this), herein lies the most pressing concern around AI in my opinion:
We are rapidly and dramatically changing a very large number of jobs and the social pressure that results will be greater than the last time we did it – during the Industrial Revolution.
How we manage this unprecedented change in social structure will be the key to the fabric of our world over the next 20 years.
Increasingly this is becoming a focus, with IBM identifying it as one of the four research pillars of its tie-up with MIT.
Ultimately, we will need to focus more on this problem and soon.
However, while I strongly believe in AI, I don’t believe that we’re careering toward this brave new world at breakneck speed. Focusing on healthcare, we’re right at the top of the hype cycle now, and there are a few twists in the road yet.
There are amazing achievements being made daily in the world of AI. But underlying it all is a very real risk that we’re getting the answers we want, rather than the real ones.
The biggest challenge to developing AI solutions for healthcare
Why do I say that? Well, bear with me as I explain what we see as the biggest challenge to developing AI solutions for healthcare.
The majority of state-of-the-art machine learning algorithms are based on a powerful technology called deep learning. Deep learning differs from more traditional machine learning in that the features used to classify data are learnt directly from the data itself, rather than depending on hand-coded features. In this way, vastly complex non-linear models can be learnt, describing the data in a way that would be impossible to code by hand.
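To make the hand-coded-versus-learned distinction concrete, here is a deliberately toy sketch in plain NumPy. It uses a linear model as a stand-in for a deep network, and tiny synthetic 8×8 “lesion” images in which one class is asymmetric; all names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 8x8 "lesion" images: class-1 lesions carry extra mass on the left.
def make_image(label):
    img = rng.normal(0.0, 0.2, (8, 8))     # background noise
    img[2:6, 2:6] += 1.0                   # a roughly symmetric lesion
    if label:
        img[2:6, 2:4] += 1.0               # asymmetry: extra mass on the left
    return img.ravel()

X = np.array([make_image(i % 2) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

# Traditional route: a human designs the feature (left/right asymmetry)
# and classifies with a simple threshold on it.
imgs = X.reshape(-1, 8, 8)
hand_coded = np.abs(imgs[:, :, :4].sum(axis=(1, 2)) - imgs[:, :, 4:].sum(axis=(1, 2)))
hand_acc = np.mean((hand_coded > 4.0) == y)

# Learned route: a linear model on raw pixels, trained by gradient descent,
# must discover which pixels matter on its own.
w = np.zeros(64)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # logistic prediction
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)
learned_acc = np.mean(((X @ w + b) > 0) == y)

# The largest learned weights end up on the pixels that differ between
# classes: the model rediscovers the asymmetry cue from the data alone.
print(f"hand-coded feature accuracy: {hand_acc:.2f}")
print(f"learned pixel-weight accuracy: {learned_acc:.2f}")
```

Both routes solve this toy problem, but only the learned one scales to features we don’t yet know how to write down by hand.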
Still with me?
Ok, so outsourcing the identification of the relevant features is hugely powerful, given that in many cases we are still researching them ourselves. But there is always a risk that the system identifies features that aren’t really indicative of the result. When this happens, the algorithm performs well on one dataset but fails to achieve the same performance on new, unseen data.
We call this a lack of generalisation and it’s a big problem that many don’t know they have.
This is especially apparent in medical machine learning applications, where differences in capture methods, demographics and other factors can bias the model that is learnt. To give an oversimplified example, say there is a tiny pen mark a clinician has used to note which mole is melanoma. The algorithm may see that and conclude “oh right, when I see a blue dot near a lesion, that is a melanoma”. That may be true in your training set, but it is certainly not true in the real world.
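The pen-mark failure can be reproduced in a few lines. This is a toy NumPy sketch (all features and numbers are invented): in the simulated “clinic” training data the artefact perfectly marks melanoma, while in the simulated real-world data it carries no information, so a model that leans on it collapses on unseen data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, dot_marks_melanoma):
    y = rng.integers(0, 2, n)                       # 1 = melanoma
    irregularity = y + rng.normal(0, 1.5, n)        # weak but genuine signal
    if dot_marks_melanoma:
        pen_mark = y.astype(float)                  # artefact: mark == melanoma
    else:
        pen_mark = rng.integers(0, 2, n).astype(float)  # real world: no link
    return np.column_stack([irregularity, pen_mark]), y

# Training data collected in clinic: clinicians marked every melanoma.
X_train, y_train = make_dataset(500, dot_marks_melanoma=True)
# Unseen real-world data: no helpful pen marks.
X_test, y_test = make_dataset(500, dot_marks_melanoma=False)

# Tiny logistic regression trained by gradient descent.
w = np.zeros(X_train.shape[1])
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

def accuracy(X, y):
    return np.mean(((X @ w + b) > 0) == y)

acc_train = accuracy(X_train, y_train)
acc_test = accuracy(X_test, y_test)
print(f"training accuracy: {acc_train:.2f}")    # near-perfect
print(f"real-world accuracy: {acc_test:.2f}")   # far worse
```

The model weights the pen-mark feature heavily because it is the easiest route to low training error; the large gap between training and real-world accuracy is exactly the lack of generalisation described above.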
The only way to make sure that you’re not overfitting to your data, and that your solution generalises, is to evaluate it properly. In healthcare, that means a well-designed clinical study.
Sounds expensive, right? It is. And hard to do. Hospitals are under extreme pressure, and while they are incentivised to do research, that research tends to happen with large multinationals, not with the small startups where the innovation is happening.
Unfortunately, there is no way around the need to properly evaluate your technology with the medical community.
Anything less and you’re hurting patients in two ways: first, by directly putting them at risk, and second, by contributing to the inevitable backlash as AI moves through the hype cycle.
We’re so passionate about this that we joined several other startups in calling for a set of standards for AI in healthcare.
But done right, AI in healthcare will deliver better care for less money and free our brilliant physicians to focus on the more complex cases, building the better understanding we need to tackle those problems as well.
We can democratise access to the best quality healthcare and unleash the talents of the entire world to think about humanity’s biggest challenges. And in that way, I believe we can make the most of AI.