
How AI and Machine Learning can help healthcare

What is Artificial Intelligence (AI)?
If you’re a journalist or a venture capitalist these days, it is almost impossible to avoid the words “artificial” and “intelligence” uttered with great certainty and ultimate decisiveness in pretty much every single pitch. But what is it?
It’s often helpful to think about artificial intelligence from two different angles: applied AI and general AI.
General Artificial Intelligence
A general AI is what most people think about when they think of artificial intelligence. In theory, this is a computer that can do almost anything, much like a human being. In practice, that is the general idea, but the results are often disappointing. Alexa and Siri are probably the two best-known general AIs — they work as personal assistants, and are very happy to help you with lots of things. But if you’re a heavy Siri user, you’ll have quickly noticed that she has some obvious limits.
For example, ask any human being living in the US, “Who was the president of the U.S. last year?” and chances are pretty good that they’ll give the correct answer. You would also assume that Siri, with the power of the entire internet at her beck and call, would be able to come up with an accurate answer. But let’s try it…

This is what I mean when I say that general AI still has a long way to go. Siri was able to come up with an answer for me, but it involved clicking through to a Wikipedia article full of information, when I asked for one specific piece of it. Also: Who uses Bing? Come on, Siri…
General AIs are incredibly hard to build: You have to anticipate… well… everything. And even though computers are pretty good at connecting bits of data, AIs aren’t yet advanced enough to be mistaken for humans.
To take the above example: If you ask your mother who was president when your sister was born, you will probably get a meaningful answer. But think about all the data involved in answering: Who asked the question, who is their sister, when was she born, and who was president when that happened? Humans do this sort of thing at incredible speed. Computers are getting there, but… not perfectly yet.
Applied AI
If general AI is aimed at doing everything, applied AI is sort of the opposite. An applied AI can only do one thing, but it is expected to do that one thing extremely well.

An example is self-driving cars. You wouldn’t expect your car to know who your sister is or who was president when your sister was born. But you would expect it to get you safely where you were going without running into any cars, pedestrians, or walls.
An applied AI has the luxury of only having to do one set of things very well, and it can ignore everything else. It means that applied AIs are very good at certain things (your bank’s AI will be extremely good at figuring out whether a transaction on your credit card is likely to be fraudulent) and rather poor at others (I don’t know about your bank, but my Citibank app can’t make a bacon sandwich to save its life).
Different types of machine learning
I hope the above examples gave you a couple of ideas for what machine learning can be used for (I’ll get into how it’s used in a medical context in just a bit) — but how does it work? What is it?

You are probably exposed to machine learning on a daily basis.
Clustering: “A and C go together”
When you use Netflix or Spotify, for example, the algorithm tries to recommend movies or music you’ll like. It does this by observing what you watch and listen to. If you’ve given ratings, that helps it too.

This process is known as clustering. It looks at the list of movies you like, and then looks for other users who have a similar list, but that also includes films you haven’t seen yet. The algorithm can then recommend movies it thinks you will like, because other users have also liked those films.
It is possible to make clustering algorithms that aren’t machine-learning, too: they operate on very large data sets, but they don’t learn.
It’s easy to see how this would be relevant in healthcare, too: Medications used and symptoms observed often appear in clusters, and leveraging machine learning could (and does) help improve prescribing, diagnosis and research.
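To make the idea concrete, here’s a minimal sketch of user-similarity clustering in Python. The films, the ratings, and the “rated 4 or higher” rule are all invented for illustration; a real recommender is vastly more sophisticated.

```python
# A toy sketch of clustering-style recommendation: find the user whose
# tastes look most like yours, then suggest films they liked that you
# haven't seen. All names and numbers are invented for illustration.
import numpy as np

# Rows are users, columns are films; 0 means "hasn't seen/rated it".
films = ["Alien", "Amelie", "Up", "Heat", "Clue"]
ratings = np.array([
    [5, 0, 4, 5, 0],   # you
    [4, 0, 5, 5, 4],   # user B (similar tastes)
    [1, 5, 2, 1, 5],   # user C (very different tastes)
])

def similarity(a, b):
    """Cosine similarity between two rating vectors (1.0 = identical taste)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

you, others = ratings[0], ratings[1:]
# Find the other user whose ratings line up best with yours...
closest = max(others, key=lambda other: similarity(you, other))
# ...and recommend anything they rated highly that you haven't seen.
picks = [film for film, mine, theirs in zip(films, you, closest)
         if mine == 0 and theirs >= 4]
print(picks)  # ['Clue'] -- user B loved it, and you haven't seen it
```

Real systems cluster across millions of users and treat “hasn’t rated” far more carefully than a zero, but the core move is the same: similar lists predict future likes.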
Back-propagation
Another technique used in machine learning is back-propagation. That’s a fancy way of saying “take what you’ve learned and feed it back into the algorithm”.

To continue with the example above: Imagine that you are recommended a film that you really hate. You watch half of it, and you think it’s terrible. So you give it a thumbs down and move on with your life. What is a poor AI to do with that piece of data? The AI thought you would enjoy the movie, but you didn’t, so there is clearly something ‘wrong’. Of course, there are many reasons you might dislike a movie (maybe you don’t like rom-coms, maybe you hate black-and-white films, maybe you can’t stand a particular actor, or perhaps you just don’t like the use of coarse language), so your one data point isn’t that helpful. The AI knows you don’t like the film, but it doesn’t know why.

It can, however, use this as one data point: your disliking this film seems to be unique to you (if it weren’t, Netflix wouldn’t have recommended the film to you in the first place). However, if there is someone else out there who likes similar films to you and also dislikes this particular film, maybe that’s a pattern. If it turns out that this is part of a bigger pattern, perhaps the machine-learning algorithm should adjust which films it recommends to everyone.
Another example of this technology is high-frequency trading, where Wall Street firms use computers to do thousands and thousands of trades in a short span of time. It starts with a theory for how to trade, which is then tried out. A simplified version could work like this: The trader creates an algorithm with three small variations (A, B, and C), and each variation starts investing $30,000. Each algorithm might stop after it has earned or lost $10,000. The algorithm that did the worst is killed off. The two algorithms that did the best are combined into one, and two new variations are created, based on the best-performing algorithms. Then the cycle starts again. This can be done fully automatically, and the decisions (which algorithm lives, which is stopped) are made based on real outcomes (earning or losing money). This is back-propagation in action: The outcomes are fed back into the algorithm (i.e. ‘machine learning’) to make the AI trading bot better.
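Here’s a toy version of that trade-test-cull loop, just to make the feedback mechanism visible. The “market” is random noise and each strategy is a single threshold number: pure invention, not a real trading system.

```python
# A toy version of the trade/test/cull loop described above. The payoffs
# are random noise and a "strategy" is just one threshold number --
# everything here is invented to show the feedback loop.
import random

random.seed(1)

def run_strategy(threshold, trials=1000):
    """Fake P&L: trade whenever a random signal beats the threshold.
    Stops once the strategy has earned or lost $10,000, as in the example."""
    balance = 0.0
    for _ in range(trials):
        signal = random.gauss(0, 1)
        if signal > threshold:                            # decide to trade
            balance += random.gauss(signal - 1, 5) * 100  # noisy payoff
        if abs(balance) >= 10_000:
            break
    return balance

# Three small variations (A, B, and C): different trading thresholds.
population = [0.5, 1.0, 1.5]
for generation in range(5):
    ranked = sorted(population, key=run_strategy, reverse=True)
    best, second = ranked[0], ranked[1]  # the worst performer is killed off
    parent = (best + second) / 2         # the two best are combined...
    population = [parent,                # ...and two new variations spawn
                  parent + random.gauss(0, 0.2),
                  parent - random.gauss(0, 0.2)]
    print(f"generation {generation}: {[round(t, 2) for t in population]}")
```

Strictly speaking, this sketch is an evolutionary loop rather than the back-propagation used inside neural networks, but the spirit is the same: real outcomes flow back in and reshape the algorithm.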
Pattern analysis
If you combine the two techniques above, you end up with a more advanced version still. A good way to think about this is to consider fraud prevention. If you get a new card from a new bank, your bank doesn’t know anything about you, and so isn’t able to be that helpful. It applies a basic algorithm to try to help you out, but it will often be wrong. This is why your card gets blocked more often early on.
As you spend your money on your credit card, patterns start to emerge. If you mostly use your card at grocery stores and the odd bar in New York, it would be very strange if you suddenly bought a television in Germany. It is natural to believe that this is fraudulent, and the payment is blocked.
The bank has a lot more data than just your transactions: It also has thousands of other customers. When you start spending your money at Starbucks and the corner bar, the bank would be smart to use clustering algorithms. Much like recommending a movie to you, it can compare your spending habits to those of others. So, if you suddenly buy a television in a Best Buy near where you do most of your shopping, the bank might realize that people like you sometimes buy a television at Best Buy, and let the transaction go through.

The bank can also learn from other things. If you are a traveling salesman, maybe you do spend $40 on meals in places all over the country. Maybe you do spend $2,000 on business-class flights to Europe. And maybe you do sometimes put down a $10,000 deposit on a fancy sports car. However, for other customers, any one of those transactions would be suspicious, and would be flagged. The flipside is also true: Maybe our traveling salesman would never spend $3 in a grocery store on their card, so if that starts happening, maybe that is a sign that the card has been stolen.
Ultimately, the more you use your card, the more your bank knows about you, and the more accurate the fraud prevention technology gets. This is true both for you as an individual (the bank knows your habits) and for all of the bank’s customers collectively (the clustering algorithms become smarter).
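As a sketch of the core idea, here’s a tiny “does this fit the pattern” check in Python. The transaction history and the three-standard-deviations cutoff are illustrative assumptions; a real bank scores far richer features (merchant, location, time of day, peer clusters).

```python
# A toy fraud check: flag a charge whose amount sits far outside the
# customer's usual spending. The history and the z-score cutoff are
# made up; real systems use many more signals than the amount alone.
from statistics import mean, stdev

history = [4.50, 38.20, 12.99, 6.75, 22.40, 9.10, 41.00, 15.30]  # past charges ($)

def looks_fraudulent(amount, history, cutoff=3.0):
    """Flag amounts more than `cutoff` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > cutoff

print(looks_fraudulent(18.00, history))    # False: an ordinary purchase
print(looks_fraudulent(1200.00, history))  # True: a television-sized surprise
```

The more history accumulates, the better the estimate of “usual” gets, which is exactly why the card gets blocked less often the longer you’ve had it.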
Neural Networks
The cutting edge of machine learning is neural networks. In a nutshell: a neural network is computer software designed to ‘think’ more like the human brain. The technology is used in a wide range of applications, including facial and image recognition.
Imagine showing a child who has never seen an apple before photos of 100 apples. They come in all different shapes, sizes, and colors. But ultimately, the child will still be able to recognize the 101st apple, even if it is of a type they’ve never seen before. This is an advanced kind of visual pattern recognition that the human brain is spectacularly good at. And machines are getting eerily good at it, too.
Combine a neural network with back-propagation, and you have a really powerful thing indeed. Imagine you show a computer 10,000 photos of all different types of fruit, all carefully categorized as apples, pears, bananas, etc. Now, show it a photo of a banana it has never seen before. If the computer identifies it as a banana, give it a thumbs up. Not only does the computer feel better about itself (just kidding, computers have no feelings… yet), but it will also use this data point to improve its computer vision.
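Here’s a miniature version of that thumbs-up, thumbs-down training loop: a single artificial neuron learning to tell bananas from apples with back-propagation (gradient descent). Instead of photos, each fruit is reduced to two made-up measurements, since real image recognition needs far more machinery than fits here.

```python
# One artificial neuron trained by back-propagation. Each "fruit" is two
# invented measurements (elongation, yellowness); label 1 = banana, 0 = apple.
import math
import random

random.seed(0)

data = [
    ((3.2, 0.90), 1), ((3.0, 0.80), 1), ((2.8, 0.95), 1),  # bananas
    ((1.0, 0.20), 0), ((1.1, 0.50), 0), ((0.9, 0.30), 0),  # apples
]

w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]  # weights
b = 0.0                                                     # bias
lr = 0.5                                                    # learning rate

def predict(x):
    """Neuron output: estimated probability that x is a banana."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

for _ in range(2000):
    for x, label in data:
        error = predict(x) - label   # the thumbs-up / thumbs-down signal
        # Back-propagate: nudge each weight against its share of the error.
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b    -= lr * error

print(round(predict((3.1, 0.85))))   # 1: a banana it has never seen before
print(round(predict((1.05, 0.40))))  # 0: an unfamiliar apple is still an apple
```

A real network stacks thousands of these neurons into layers and learns its own features from raw pixels, but the feedback step is the same one: compare the guess to the label, and push the error backwards.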
To get a feeling for how incredibly good computer vision has gotten, upload any photo into Microsoft Azure’s Computer Vision API.
How AI / ML is used in healthcare
Artificial intelligence is already used in healthcare in lots of different ways.

A basic place to start imagining the use of AI is staffing. Everyone who’s ever worked in an emergency room knows that Fridays are busy (people drinking and doing dumb things), and Saturdays are busy, too (people drinking, doing DIY, and doing dumb things). You don’t need an AI for that — but what if there was a correlation between payday and hospitalizations? Hot or cold weather and doctor’s visits? Whether or not the local sports team is on a winning spree?
By plugging a lot of data sources into a computer, it’s possible to start spotting some of these trends, and then to plan accordingly: Maybe it is prudent to schedule an extra triage nurse before the local team plays in the Super Bowl, just in case…
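Spotting those trends can start as simply as lining candidate signals up by date and checking which ones move with admissions. A minimal sketch, with entirely fabricated numbers:

```python
# A toy trend-spotter: which signals correlate with daily ER visits?
# All of the data below is fabricated for illustration; a real analysis
# would use months of records and proper statistical controls.
import numpy as np

er_visits   = np.array([80, 95, 120, 85, 90, 130, 125])  # daily admissions
payday      = np.array([ 0,  0,   1,  0,  0,   1,   1])  # payday? (1 = yes)
temperature = np.array([21, 19,  22, 20, 23,  18,  22])  # daily high, deg C
home_game   = np.array([ 0,  1,   1,  0,  0,   1,   1])  # local team playing?

for name, signal in [("payday", payday),
                     ("temperature", temperature),
                     ("home game", home_game)]:
    r = np.corrcoef(er_visits, signal)[0, 1]
    print(f"{name:12s} correlation with ER visits: {r:+.2f}")
```

If payday keeps lighting up across months of data, that’s your cue to roster the extra triage nurse.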

The first step towards big data analysis is having good electronic records. Of course, as with any other system, it’s a question of garbage in = garbage out… But at least now that we are starting to transition to electronic health records (EHR), there’s a fighting chance to start spotting trends and correlations.

Diagnostics is where some of the most exciting developments are taking place, in my opinion.
A recent paper from Stanford shows that computers are as good as trained dermatologists at identifying skin cancer — and of course technologists are already talking about implementing the learnings as a smartphone app, too.
Computers are already in use for analyzing breast and heart imaging, with results that rival the very best doctors, as Siddhartha Mukherjee explores in this New Yorker article. The best doctors in the world are still better than machines. However: Not everybody has access to the best doctors in the world. Besides, machines learn awfully fast, and can be duplicated a lot more easily than doctors. And even if you are the best doctor in the world, it might be just a little bit arrogant not to take a second opinion from a machine, just in case.

Another place where big data and machine learning are starting to get interesting is the so-called Quantified Self movement. A lot of people (myself included) have started wearing step counters, for example.
I’ll be the first to admit that step counters aren’t accurate — a reading of 10,000 steps doesn’t necessarily mean that you literally took 10,000 steps. In addition, there is variation between manufacturers (my Apple Watch gives a different step count than my Fitbit — usually around 10% fewer steps), and even between individual units (I once wore two Fitbits on the same wrist; the results should have been identical, but weren’t).
So if you’re looking for clinical precision, perhaps fitness trackers aren’t it. However, that doesn’t mean the data isn’t helpful. The absolute numbers reported by a Fitbit might not be accurate, but the relative data might very well be. I know from personal experience that there is a 100% perfect correlation between me getting more than 12,000 steps and me sleeping well at night. Does that mean I literally took 12,000 steps? No. Do I sleep poorly when I’ve only taken 3,000 steps in a day? Definitely.
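That relative-versus-absolute point is easy to demonstrate: correlation doesn’t change when a tracker consistently over- or under-counts. A tiny sketch, with invented numbers:

```python
# Relative beats absolute: a consistently biased step counter still
# reveals the steps/sleep relationship. All numbers are invented.
import numpy as np

steps = np.array([3000, 12500, 8000, 13000, 4500, 12200, 6000])
sleep_quality = np.array([4, 9, 6, 9, 5, 8, 5])  # self-rated, 1 to 10

print(np.corrcoef(steps, sleep_quality)[0, 1])        # strong positive link
# A tracker that undercounts everything by 10% changes nothing:
print(np.corrcoef(steps * 0.9, sleep_quality)[0, 1])  # identical correlation
```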
There are tools for tracking steps, tools for tracking sleep, apps for tracking intake of nutrition, alcohol, and coffee, and smart bathroom scales you can use to track your weight over time. There’s a tremendous amount of data out there — almost none of which is being accessed by my medical providers. Perhaps that makes sense — after all, none of these devices are FDA-approved or tested to ensure any sort of accuracy. But I can’t help but wonder whether the aggregate data has value from a medical point of view.
After all, if a company like Jawbone can measure how far from an earthquake’s epicenter people were woken up, there has to be value in the data from millions of people tracking their every move. I look forward to the day my doctor asks for read-access to my health data — if it’s connected to the data of tens of thousands of other patients, and results in better healthcare for me and others, I’m all for it.
When writing this, Haje Jan Kamps was the CEO at LifeFolder, a company that was helping people think about and plan for end of life using chatbots.