Dec 31, 2021

Data Quality Issues Plague the US Health Care System

Oleg Bess, MD, explains why data quality is essential to improving the United States health care system. This article was originally published by OncLive.

Data quality is essential to improving the United States health care system. But what is data quality? Basically, it is about extracting value for patients, clinicians, and payers. High-quality data are both usable and actionable, whereas low-quality data, such as duplicate records, missing patient names, or obsolete information, create barriers to care delivery and billing/payment issues. These inefficiencies result in monetary losses across the health care system.

Unfortunately, health care has lagged behind virtually every other industry in leveraging data for strategic and operational advantage. This is not for lack of data—more patient health data are being captured today by medical equipment, digital devices, and apps than ever before. If data are gold, the health care industry is sitting on top of a gold mine.

In general, the industry has done a poor job of mining and refining this gold ore of data to the point of being useful. This is a wasted opportunity because quality data would provide value to all health care stakeholders, including hospitals, payers, health information exchanges, laboratories, and patients.

Effect on Research, Patient Outcomes

Poor data quality has negative ramifications throughout health care. The way we bring new medications to market is a case in point. The first phase is identifying and developing the medication, a process that accounts for less than 20% of the cost of drug development. Next come clinical trials, which account for most drug development costs.

Clinical trials are long and arduous. Worse, from a data quality standpoint, data still are collected on paper, transferred to computers, and typically not associated with other data sets for similar types of patients or even for other patients in the same trial. If all this information were easily available and automated, clinical trials would cost less than 10% of what they do now.

In clinical practice, there is a significant lack of aggregated patient data, leaving clinicians knowing only part of a patient’s story. Further, when a physician sees a patient at the hospital, they need the patient’s outpatient records to make sound, evidence-based decisions about appropriate treatment. In far too many cases, however, the data that would inform clinicians at the point of care are trapped in silos scattered across the health care landscape.

Health care has not availed itself of advanced digital technologies such as artificial intelligence (AI) and machine learning, which are transforming many other industries. This is in large part because these tools are either not accessible or the quality of health data is so poor that intelligent machines would struggle to process and analyze them for actionable insights into patient care.

In contrast, access to quality health care data would create scenarios in which AI and machine learning can quickly provide clinicians with information at the point of care that enables them to educate patients about their specific condition, offer referrals to appropriate specialists, suggest new medications, and improve outcomes. With the right data and insights, clinicians may be able to match the patient with a clinical trial that could save the patient’s life.

Low-quality data are also a problem for payers, who need timely information to make decisions regardless of the condition involved. Data collection starts when a patient receives a diagnosis, yet the payer may not learn of it for weeks. A patient who receives a diagnosis of cancer, for example, may suddenly have to find a specialist and determine whether the practice takes their insurance.

However, if a payer finds out right away that this patient has cancer, the payer can help guide the patient to the right provider, alleviating some of the burden of financial concerns related to medical treatment.

Improving Data Quality

There are 3 main steps to achieving high-quality data. The first is ensuring access to the data the clinician needs. There are still many legacy systems in health care, which house most medical records on an organization’s servers and not on a cloud platform, where they could be more easily accessed and aggregated by authorized users. Although interoperability and data sharing have improved in the health care setting in recent years, integrations are being built one at a time. Bringing data into a central database—where that information can be turned into gold—remains a challenge.

The next step involves identity management. Much of the value in health data comes from the ability to document longitudinal change in individual patients. Clinicians may, for example, want to see how medications affect lab results for a particular patient. Without reliable identity management, even if they could connect disparate data sources and funnel the data into a single database, they could not associate the data with a specific patient. With effective identity management, it is possible to create a longitudinal patient record that brings substantial clinical and efficiency benefits.
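To make the idea concrete, here is a minimal sketch of deterministic identity matching: records from different source systems are grouped under one enterprise identifier when a normalized name and birth date agree. The record fields, the match key, and the `EID` format are all illustrative assumptions, not the article's (or 4medica's) actual method; production systems use far richer probabilistic matching.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PatientRecord:
    source: str        # system that produced the record (illustrative field)
    first_name: str
    last_name: str
    birth_date: str    # ISO 8601 date string, e.g. "1980-04-02"

def match_key(rec: PatientRecord) -> tuple:
    """Deterministic match key: case-insensitive, trimmed name plus birth date."""
    return (rec.first_name.strip().lower(),
            rec.last_name.strip().lower(),
            rec.birth_date)

def link_records(records):
    """Group records from disparate systems under one enterprise ID."""
    index = {}    # match key -> enterprise ID
    linked = {}   # enterprise ID -> list of source records
    next_id = 1
    for rec in records:
        key = match_key(rec)
        if key not in index:
            index[key] = f"EID-{next_id:04d}"   # hypothetical ID format
            next_id += 1
        linked.setdefault(index[key], []).append(rec)
    return linked

records = [
    PatientRecord("EHR", "Jane", "Doe", "1980-04-02"),
    PatientRecord("Lab", "JANE", "doe ", "1980-04-02"),   # same patient, messier entry
    PatientRecord("Pharmacy", "John", "Smith", "1975-11-20"),
]
for eid, recs in link_records(records).items():
    print(eid, [r.source for r in recs])
```

Even this toy version shows why identity management must precede aggregation: the EHR and lab entries for Jane Doe only become one longitudinal record after normalization and keying.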

The third phase of improving data quality is to make the data usable and actionable. Once the longitudinal record of accurate patient data is created, it must be organized and easy for clinicians to locate and read. This requires a process called data normalization. Health data can come from multiple sources (eg, electronic health records, laboratories, and pharmacy systems), all of which may use different coding for a medical procedure, different terms for a certain test, or even different language to categorize genders. Data normalization creates a common terminology that enables the semantic interoperability necessary to make data actionable.
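A simplified sketch of that normalization step might map source-specific test names and gender labels onto a shared vocabulary. The lookup tables and the `HBA1C` target code below are hypothetical; real pipelines map to standard vocabularies such as LOINC or SNOMED CT.

```python
# Illustrative local-term -> common-code maps (assumptions, not real code sets).
TEST_NAME_MAP = {
    "hgb a1c": "HBA1C",
    "hemoglobin a1c": "HBA1C",
    "a1c": "HBA1C",
}
GENDER_MAP = {"m": "male", "male": "male", "f": "female", "female": "female"}

def normalize_result(raw: dict) -> dict:
    """Rewrite one lab result using the shared terminology."""
    test = raw["test_name"].strip().lower()
    gender = raw["gender"].strip().lower()
    return {
        "test_code": TEST_NAME_MAP.get(test, "UNMAPPED"),
        "gender": GENDER_MAP.get(gender, "unknown"),
        "value": raw["value"],
    }

# Two labs reporting the same test under different names and gender labels.
lab_a = {"test_name": "Hgb A1c", "gender": "F", "value": 6.1}
lab_b = {"test_name": "Hemoglobin A1C", "gender": "female", "value": 5.8}
print(normalize_result(lab_a))
print(normalize_result(lab_b))
```

After normalization, both results carry the same test code and gender value, so they can be trended on one chart or queried together, which is the semantic interoperability the article describes.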

There’s no magic bullet for improving health care data quality. It will require a joint effort and the innovation inherent in the free market. There will be companies that help us collect and connect to data, technologies that identify the data and others that normalize it, and companies that provide quality databases that health care stakeholders can use to perform advanced analytics. The end result will be a health care system that provides better care at a lower cost.

Oleg Bess, MD, is CEO and cofounder of 4medica, which provides clinical data management and health care interoperability software and services.
