CHAPTER 9. Big Data Integration and Inference

2019 
Toxicology data are generated on large scales by toxicogenomic studies and high-throughput screening (HTS) programmes, and on smaller scales by traditional methods. Both big and small data have value for elucidating the toxicological mechanisms and pathways perturbed by chemical stressors. In addition, years of investigation have produced a wealth of knowledge, reported in the literature, that is also used to interpret new data; such knowledge, however, is rarely captured in traditional databases. In the big data era, computer automation is needed to analyse and interpret datasets, which in turn requires aggregating data and knowledge from all available sources. This chapter reviews ongoing efforts to aggregate toxicological knowledge in a knowledge base built on the Adverse Outcome Pathways framework, and provides examples of data integration and inferential analysis for use in (predictive) toxicology.