DEV Community

SeattleDataGuy
The Problem With Machine Learning In Healthcare

By Ben Rogojan

An article from the Wall Street Journal has been floating around online recently, discussing how models will run the world. I believe there's a lot of truth in that. Machine learning algorithms and models are becoming ubiquitous and are increasingly trusted across industries. This, in turn, will lead us to spend less time questioning the output of these algorithms and simply let the system give us the answer. We already rely on companies like Google, Facebook, and Amazon to inform us of ideas for dates, friends' birthdays, and what the best products are. Some of us don't even think twice about the answers we receive from these companies.

As a data engineer who works in healthcare, this is both exciting and terrifying. Over the past year and a half, I have spent my time developing several products to help healthcare professionals make better decisions, specifically targeting healthcare quality, fraud, and drug misuse.

As I was working on the various metrics and algorithms I was constantly asking myself a few questions:

How will this influence patient treatment?

How will it influence the doctor's decision?

Will this improve the long-term health of a population?

In my mind, most hospitals are run like businesses, but there is some hope that their goal isn't always just the bottom line. My hope is that they are trying to serve their patients and communities first. If that is the case, then the algorithms and models we build can't just be focused on the bottom line (as they often are in other industries). Instead, they need to consider how things will impact the patient and their overall health, and how a metric could change a doctor's behavior in a potentially negative way.

For instance, the Washington Health Alliance, which does a great job at reporting on various methods to help improve healthcare from a cost as well as a care perspective, wrote a report focused on improving healthcare cost by reducing wasteful procedures. That's a great idea!

In fact, I worked on a similar project, and that is when I started to wonder. I am sure many doctors will appropriately recalibrate their processes. But what about the ones who don't?

What happens when some doctors try to correct their behavior too much and cause more harm than good because they don't want to be flagged as wasteful?

Could we possibly cause doctors to miss out on obvious diagnoses because they are so concerned about costing the hospital and patients too much money? Or worse, perhaps they come to rely too strongly on their models to diagnose for them in the future. I know I have over-adjusted my behavior in the past when given criticism, so what is to stop a doctor from doing the same? There is a fine line between helping a human make a good decision and forcing them to rely on the thinking of a machine (like Google Maps --- how many of us actually remember how to get anywhere?).
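To make the over-correction worry concrete, here is a minimal sketch of the kind of "wasteful procedure" flag described above. Everything in it is a hypothetical assumption for illustration: the function names, the `low_value` field, and the 15% cutoff are invented, not any real Washington Health Alliance methodology.

```python
def wasteful_rate(procedures):
    """Fraction of a doctor's procedures tagged as low-value.

    `procedures` is a hypothetical list of dicts, each with a
    boolean "low_value" field set by some upstream model.
    """
    if not procedures:
        return 0.0
    flagged = sum(1 for p in procedures if p["low_value"])
    return flagged / len(procedures)


def flag_doctor(procedures, threshold=0.15):
    """Flag a doctor whose low-value rate exceeds an arbitrary cutoff.

    A hard threshold like this is exactly what invites over-correction:
    a doctor at 16% gets flagged while one at 14% does not, even though
    the clinical difference between them may be pure noise.
    """
    return wasteful_rate(procedures) > threshold
```

A doctor who knows this flag exists has an incentive to steer their rate just under the threshold, regardless of whether each individual procedure was clinically warranted --- which is the behavioral risk the article is pointing at.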

But are you thinking because you were told to think...or because you know what you are doing?

There is a risk of focusing more on the numbers and less on what the patients are actually saying.

Doctors focusing too much on the numbers and not on the patient is a personal concern of mine.

If a model is wrong for a company that is selling dress shirts or toasters, that means missing out on a sale and missing a quarterly goal. A model being wrong in healthcare could mean someone dies or isn't properly treated.

So as flashy as it can be to create systems that help us make better decisions, I do wonder whether humans have the discipline not to rely on them for the final say.

As healthcare professionals and data specialists, we have an obligation not just to help our company, but to consider the patient. We need to be not just data-driven but human-driven.

We might not be nurses and doctors, but the tools we create now and in the future will directly influence decisions made by nurses and doctors. We need to take that into account. As data engineers, data scientists, and machine learning engineers, we have the ability to make tools that amplify the abilities of the medical professionals we support. We can make a huge impact.

I agree, models will slowly start to run our world more and more (they already do in areas such as trade, some medical diagnoses, purchasing at Amazon and more). This means we need to think through all the operational scenarios and consider all the possible outcomes --- both good and bad.

Want to read more great posts about data science and programming?

Hadoop Vs Relational Database
How Algorithms Can Become Unethical and Biased
Using BigQuery To Analyze Healthcare Data
Top 10 Business Intelligence (BI) Implementation Tips
5 Great Big Data Tools For The Future - From Hadoop To Cassandra
Creating 3D Printed WiFi Access QR Codes with Python
The Interview Study Guide For Data Engineers

Image from: Photo by Piron Guillaume on Unsplash

Top comments (1)

Kiran Patel

I couldn't agree more. There is also the problem of data. Big data in healthcare is now a commodity that has been compared in value and influence to oil. It can be harnessed by AI-driven, automated machine learning, where computers spot trends that can be analyzed and used to great effect by the organization monitoring the process. In the medical field it can be invaluable, surfacing patterns too widespread for human capacities to catch. But to do that, one must gather the data, and if there's one form of data people are anxious not to have spread around the world recklessly, it is personal medical records. I've already seen it happen, raising serious doubts over the ethics of data privacy. The solution is to regulate tightly and to encrypt the connection between the data itself and the details of who it belongs to, so that its potential loss causes no breach of privacy.

This article has more on this issue of AI, ML and deep learning in smart healthcare - weblineindia.com/blog/smart-health...