
Driving precision medicine and the evolution of clinical data management

09.04.21

The last three decades have transformed the technologies that drive research, clinical trials being a prime example, particularly when we consider that as recently as the nineties clinical trials were still a largely paper-based process.

In this article, we reflect on how the industry has evolved and how these changes led to paradigm shifts. The emergence and growth of clinical technologies can be considered over distinct periods, or "ages", each of which has shaped the way data management for research and clinical development within pharma has evolved.

The age of evolution — mid-nineties to circa 2005

The nineties were the age of evolution for clinical technologies. Much of this was driven by the emergence of internet capabilities around the turn of the century, and data collection and management underwent a paradigm shift. This evolutionary stage saw us move from paper to electronic: as Electronic Data Capture (EDC) systems became more configurable, queries began to be resolved in real time, batch validation became a thing of the past, and Data Clarification Forms (DCFs) were consigned to history.

It’s only when we reflect on the profound impact of these changes that we realize how significant this period was for the way we rely on clinical technologies today. Five years into the 21st century, thin-client EDC systems became the mainstay of data capture in trials. The rest, of course, is history. Data capture and management were not the only areas touched in this age of evolution: Excel sheets gave way to Clinical Trial Management Systems (CTMS), and the power of the web helped Interactive Voice Response Systems (IVRS) evolve into more configurable and connected web-based RTSM systems, to cite just two examples from across the landscape of clinical technologies.

The age of specialization — 2005 to circa 2015

From around 2005 onwards, technology providers became more and more specialized in their areas. Whether it was safety management, Randomization and Trial Supply Management (RTSM), Clinical Trial Management Systems (CTMS), Electronic Data Capture (EDC) or ePRO, niche leaders emerged, and systems within these niches became very good at what they do. Looking across the landscape, clinical technology products became increasingly specialized and highly configurable.

Specialization drove the need for harmonization and standards, which gave impetus to initiatives such as CDISC, PHUSE, and the Pistoia Alliance. Data harmonization led to standards like CDASH, with clinical data structure models such as SDTM becoming the norm. This specialization of clinical systems within their niches led to another need: for systems to talk to each other, exchange data, and form a connected ecosystem.

The development of the Operational Data Model (ODM) made APIs the new buzzword of the industry. The need to integrate discrete systems led to shared API libraries: companies published their APIs, and real-time querying, posting, and requesting of data between systems became the big ask. The age of specialization had such an impact that we can see its influence in three broad areas: the high configurability of clinical technology systems, the development of common standards, and, last and most importantly, the need to integrate.
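To make the idea of standards-based exchange concrete, the sketch below assembles a minimal CDISC ODM-style ClinicalData fragment in Python. It is an illustration only: the study, subject and item identifiers are invented, the fragment omits attributes a schema-valid ODM file would carry (such as ODMVersion and CreationDateTime), and how such a payload is actually sent or received depends on the API each vendor publishes.

# A minimal, illustrative sketch (Python 3 standard library only) of assembling
# a CDISC ODM-style ClinicalData fragment; all OIDs and values are invented.
import xml.etree.ElementTree as ET

ODM_NS = "http://www.cdisc.org/ns/odm/v1.3"   # ODM 1.3 XML namespace
ET.register_namespace("", ODM_NS)

def q(tag):
    # Qualify a tag name with the ODM namespace.
    return f"{{{ODM_NS}}}{tag}"

odm = ET.Element(q("ODM"), {"FileType": "Transactional", "FileOID": "EXAMPLE.001"})
clinical = ET.SubElement(odm, q("ClinicalData"),
                         {"StudyOID": "STUDY.DEMO", "MetaDataVersionOID": "1"})
subject = ET.SubElement(clinical, q("SubjectData"), {"SubjectKey": "SUBJ-001"})
event = ET.SubElement(subject, q("StudyEventData"), {"StudyEventOID": "SE.SCREENING"})
form = ET.SubElement(event, q("FormData"), {"FormOID": "FORM.VITALS"})
group = ET.SubElement(form, q("ItemGroupData"), {"ItemGroupOID": "IG.VITALS"})
ET.SubElement(group, q("ItemData"), {"ItemOID": "IT.SYSBP", "Value": "120"})

# Serialise the fragment that one system would hand to another's API.
print(ET.tostring(odm, encoding="unicode"))

In practice no data manager writes this by hand; the point is simply that a common model like ODM gives every specialized system the same vocabulary for subjects, events, forms and items, which is what makes the integrations described above tractable.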

The age of convergence - today

Today we are on the threshold of another transformation within the industry, one where the barriers between research and clinical development are blurring, with pharma beginning to recognize the value of its existing data assets and to use these to empower future research.

“The way we currently operate leaves data in a state that requires months to find, access and pool together for answering our important research questions. This is a valuable time in our patients' lives that we are wasting simply because we do not pay our data assets the careful attention they deserve when we create and preserve them.” Roche Group CEO Dr. Severin Schwan (1)

The big driver for this is, of course, cost. With the cost of bringing a new drug to market hovering around $2.6 billion, precision medicine techniques, which draw on the large-scale availability of genomic and Life Science Data to improve scientific outcomes, hold the promise of bringing this cost down. The challenge is the need to consolidate, manage, and analyze scientific data from all sources and to draw actionable insights from it.

Genestack's core specialization is working with large-scale Life Science Data, the fundamental challenge for data ingestion and consolidation systems across all life sciences sectors, whether that means making decisions based on genomic information linked to in-flight studies, retrospective research, or biomarker discovery.

To learn how Genestack's flagship software, Genestack ODM, can support your clinical research and help you overcome the challenges of managing genomic information in your in-flight studies, or to see how we can make a difference to the way you manage Life Science Data in your organization, contact us.

Get in touch >

Related:

> Building a more integrated and scalable Life Science Data landscape

> Genestack signs multi-year agreement with AstraZeneca to implement Genestack’s Genestack ODM
