Conference, Pharma R&D

P4 medicine: The big data technologies and strategies at Molecular Medicine Tri-Con 2018

27.02.18

 

The 25th Molecular Medicine Tri-Conference 2018 kicked off on Monday 12th February. It is one of the leading conferences in the drug discovery, development and diagnostics field.

With over 3,000 attendees and 16 parallel conference tracks, there were great opportunities to hear key thought leaders and innovators discuss advances being made in the field.

Leroy Hood, a systems biology pioneer, gave the opening keynote lecture on how developing technologies and the generation of large volumes of data can be used in what he calls 'P4 medicine': personalised, predictive, preventative and participatory medicine. Hood described the large-scale wellness programme he has launched, which aims to capture data across genomics, proteomics, metabolomics, wearable devices and more, to monitor wellness and disease. The pilot study of 108 healthy individuals indicated that, in fact, 100% of these healthy participants had actionable clinical findings that could lead to improved health and reduced risk for specific diseases. The discovery of biomarkers at the wellness-to-disease transition will transform industries allied to healthcare, including pharma. In closing, Hood predicted that the digitisation of medicine will help rein in the rising costs of healthcare.

The Tri-Con track 'Informatics Data and Tools - Using Data to Support Molecular and Precision Medicine' provided great insight into the challenges and successes many organisations are experiencing in handling the growing volume of biomedical data available for analysis.

To manage and make the best use of both raw data and metadata, many organisations are developing or bringing in solutions to make this process reproducible, scalable and efficient, improving the speed with which new drug targets can be identified and validated.

Juergen Hammer, Global Head of Data Science at the Roche Innovation Centre, New York, described some of the challenges Roche faces in managing increasing data volumes across its different sites. Hammer explained how data commons and the reduction of data silos can help enable advanced analytics such as deep learning and machine learning. To address this, Roche is using a hybrid-model approach, with on-premise solutions encompassing Arvados and Genestack* integration to provide structured access to over 450 TB of distributed raw data and metadata. This currently incorporates more than 13 NGS workflows, and will ultimately make it possible to aggregate data from different sources including omics, imaging, FACS and more.

Ronghua Chen, Director of Scientific Informatics, Global Research IT, Merck, discussed the four V's of the big data challenge (volume, variety, veracity and velocity) and the time-consuming process of setting up data management infrastructure and pipelines before one can even begin the data analysis. With developing technologies such as single-cell RNA sequencing, the volume of data will only increase further, and without appropriate measures in place the resources required to manage it will grow too. A modular ecosystem approach to managing and accessing this data, with a wide selection of analytical tools including machine learning approaches, provides a scalable solution without the need to change the underlying infrastructure.

 

The FAIR guiding principles (Findable, Accessible, Interoperable, Reusable) and data commons were among the models discussed for enabling advanced analytics of omics and other biomedical data. These models bring scientific researchers to the data to collaborate, integrating knowledge so it can be reused and expanded further.

No single solution will provide all the answers. Building an ecosystem that addresses data management, with the ability to build analysis and visualisation tools on top, is the way forward for analysing the volumes of data being generated.

This will ultimately lead to a better understanding of disease and the development of effective drug treatments in shorter timeframes.

 

* The Genestack platform brings together a powerful data and metadata management infrastructure, a full suite of bioinformatics pipelines and a range of interactive visual analytics tools. You can read more in our e-book, 'The importance of metadata in genomics and the FAIR principles', at https://genestack.com/ebook-importance-of-metadata-in-genomics/. If you would like to learn more about the Genestack platform, please contact us at info@genestack.com.
