What processes need to change at an institutional level to enable data collaboration at scale?

When it comes to unlocking the value of healthcare data, the issue is not the amount of data available; it is how that data is collected, curated, and analyzed that determines its value. Although healthcare data is growing rapidly, at 48% year over year according to an annual study from IDC, only 5-8% of that data is used in decision-making. The global COVID-19 pandemic has illuminated important gaps for the healthcare industry, with underuse of data among the most glaring. Pandemic-born consortia have championed data sharing as a way to find solutions faster and at scale, prioritizing the right data at the right time to have the greatest impact on patient outcomes.

How do we capitalize on the momentum in data collaboration brought on by COVID-19?

  • Create an environment where the data is locally curated and contextualized to add information beyond what is captured in the care of the patient
  • Leverage this process to build data assets designed to solve specific use cases
  • Capitalize on this process and the resulting data across organizations to solve larger public health problems

Building a better data asset

Breaking down data silos and creating an ecosystem that allows all stakeholders to work collectively is no longer a nice-to-have. To realize that vision, we must understand that data collected for one purpose (e.g., patient care) may not be specific enough to answer research questions. The data may need additional curation and contextualization to support broader use cases. This is very apparent in how researchers work today: they extract data from the EHR for a research project, then curate and add context to that data to support their use case. The added information can take the form of data abstraction, mapping concepts to standard vocabularies, or capturing other interpretations of the data. Traditionally, this work and knowledge have been locked in that researcher's dataset, never contributing to the rest of the research enterprise. Surfacing that knowledge across the organization can power new and more efficient research for the entire enterprise.
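To make this concrete, here is a minimal sketch of one such curation step in Python: a locally captured EHR value is mapped to a standard-vocabulary concept, with a record of who added the context, so the mapping lives in a shared catalog rather than in one researcher's private extract. Everything here (the mapping table, the Annotation record, the shared catalog) is a hypothetical illustration, not a real EHR or Syntropy API.

    # Hypothetical sketch of a shared curation step. All names are
    # illustrative; the SNOMED CT codes appear only as examples of
    # mapping local labels to a standard vocabulary.
    from dataclasses import dataclass

    # A site's local EHR shorthand mapped to standard concepts.
    LOCAL_TO_SNOMED = {
        "sob": ("267036007", "Dyspnea"),
        "fever": ("386661006", "Fever"),
    }

    @dataclass
    class Annotation:
        raw_value: str   # value as captured during patient care
        code: str        # standard-vocabulary concept ID
        label: str       # human-readable concept name
        curated_by: str  # who added the context, so others can assess it

    def curate(raw_value: str, researcher: str) -> Annotation:
        """Map a locally captured value to a shared vocabulary concept."""
        code, label = LOCAL_TO_SNOMED[raw_value.lower()]
        return Annotation(raw_value, code, label, curated_by=researcher)

    # Written to a shared catalog instead of a private file, so the next
    # researcher inherits the mapping rather than redoing the work.
    shared_catalog = [curate("SOB", researcher="r.lee")]

The essential move is the last step: the curated mapping is stored where the whole enterprise can find it, so the knowledge compounds instead of disappearing into a single project.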

In practice, this requires developing infrastructure, methods, and tools for collaborating on these data assets. Syntropy's work with the University of California, Irvine is an example: the context and insights that one researcher adds are available to the next researchers who use the same data. As researchers iterate on this process, making the data more usable for their own work, they contribute to an ever-growing and increasingly valuable resource for the research community at large. Work we have done at UCI to capture COVID-19 symptoms and comorbidities is available for the next researcher who needs the same data. This work is not novel by any means; researchers do it all the time. What is novel is that the additional knowledge is not lost; it is available for other researchers to leverage.

As we foster an environment where patient data can be accessed across the research and care delivery organization, we set a new precedent for how data is curated, contextualized, and leveraged to develop new insights.

Enabling collaboration at scale

As with the challenges academic researchers face, much of the data that industry researchers need for their use cases is not fully captured in the course of caring for patients. Data may be captured only partially or in unstructured form (e.g., the reason for changing therapy, or the status of a disease), or not captured at all. Because the organization makes its curated and contextualized data available to everyone, that data is readily available for external collaborations as well. Another key to solving industry's use cases is enabling a dialogue between the data source and the data collaborator about what is needed. This lets the data source refine its curation for the use case at hand, creating a shorter path to a quality data set for the collaborator. It is a superior method to outsourcing data curation to parties without local context, which keeps the data collaborator at arm's length from the data source.
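As a sketch of what that dialogue can produce, assume a hypothetical request format in which the collaborator names the unstructured concept they need in structured form; the data source's curators then apply local rules, with manual review as a backstop. The keyword rule below is deliberately naive and purely illustrative, not a real specification.

    # Hypothetical sketch: a collaborator's request drives use-case-specific
    # curation at the data source. The request schema and keyword rule are
    # assumptions for illustration only.
    import re
    from typing import Optional

    # The collaborator specifies which concept they need structured.
    request = {
        "field": "reason_for_change",
        "allowed": ["toxicity", "progression", "cost"],
    }

    def abstract_reason(note_text: str) -> Optional[str]:
        """Pull a structured 'reason for changing therapy' out of a note."""
        for reason in request["allowed"]:
            if re.search(reason, note_text, re.IGNORECASE):
                return reason
        return None  # no match: route to the local curation team for review

    note = "Therapy switched from drug A to drug B due to Grade 3 toxicity."
    print(abstract_reason(note))  # -> "toxicity"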

The long-term vision is to build out this process at multiple organizations so that researchers can collaborate across them more easily, drawing data from each and using the combined data set for broader analysis.

Technology offers the promise of uniting academic medical centers and life sciences organizations on a holistic data infrastructure that introduces a new working model, one in which incentives are aligned to improve research efficiency, innovative insights, and ultimately patient outcomes. Changing the way academic medical centers curate data will result in improved models for real-world evidence and facilitate next-generation research. Operating with this intention will allow enterprises such as UCI to scale and integrate data to transform our understanding of disease and care pathways. To learn more about how Syntropy can help you create an environment where data can be accessed by all stakeholders, visit syntropy.com.

Source: https://www.statnews.com/sponsor/2021/11/26/what-processes-need-to-change-at-an-institutional-level-to-enable-data-collaboration-at-scale/