PEERS - an open science "Platform for the Exchange of Experimental Research Standards" in biomedicine

2021 
Laboratory workflows and preclinical models have become increasingly diverse and complex. Confronted with a wealth of information of ambiguous relevance to their specific experiments, scientists run the risk of overlooking critical factors that can influence the planning, conduct and results of studies and that should have been considered a priori. Neglecting such crucial information may result in sub-optimal study design and execution, bringing into question the validity of the generated outcomes. As a consequence, considerable resources are wasted on biomedical research that turns out to be irreproducible and not sufficiently robust for further project development. To address this problem, we present PEERS (Platform for the Exchange of Experimental Research Standards), an open-access online platform built to help scientists determine which experimental factors and variables are most likely to affect the outcome of a specific test, model or assay and therefore ought to be considered during the design, execution and reporting stages. The PEERS database is categorized into in vivo and in vitro experiments and provides lists of factors derived from the scientific literature that have been deemed critical for experimentation. Most importantly, the platform is based on a structured and transparent system for rating the strength of evidence related to each identified factor and its relevance for a specific method/model. The rating procedure is not limited to the PEERS working group but also allows for community-based grading of evidence. As a proof of concept that the PEERS approach is feasible, we focused on a set of in vitro and in vivo methods from the neuroscience field, which are presented in this article. Using the Open Field paradigm in rodents as an example, we describe the selection of factors specific to each experimental setup and the rating system, and also discuss the identification of additional general items that transcend categories and individual tests. Moreover, we present a working format of the PEERS prototype with its structured information framework for embedding data and critical back-end/front-end user functionalities. PEERS not only lets users search for information that facilitates experimental rigor, but also draws on the engagement of the scientific community to actively expand the information contained within the platform through a standardized approach to data curation and knowledge engineering. As the database grows and the benefits become more apparent, we will expand the scope of PEERS to any area of applied biomedical research. Collectively, by helping scientists to search for specific factors relevant to their experiments, and to share experimental knowledge in a standardized manner, PEERS will serve as the ultimate exchange and analysis tool to enhance data validity and robustness as well as the reproducibility of preclinical research. PEERS offers a vetted, independent tool by which to judge the quality of information available on a certain test or model, identifies knowledge gaps and provides guidance on the key methodological considerations that should be prioritized to ensure that preclinical research is conducted to the highest standards and in line with best practice.
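
The abstract describes a database of experimental factors, each linked to a specific test or model and graded for strength of evidence by both the PEERS working group and the wider community. The snippet below is a minimal sketch of what one such structured record might look like; the class and field names (FactorEntry, EvidenceStrength, consensus_rating) are assumptions made for illustration only and do not reflect the actual PEERS data model or API.

```python
# Illustrative sketch of a structured factor record with an evidence-strength
# rating and community grades. All names here are hypothetical, not the PEERS schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class EvidenceStrength(Enum):
    """Hypothetical grading levels for the strength of published evidence."""
    LOW = 1
    MODERATE = 2
    HIGH = 3


@dataclass
class FactorEntry:
    """One experimental factor linked to a specific test, model or assay."""
    category: str                  # e.g. "in vivo" or "in vitro"
    test_or_model: str             # e.g. "Open Field (rodent)"
    factor: str                    # e.g. "ambient light intensity"
    evidence_strength: EvidenceStrength          # working-group rating
    community_ratings: List[int] = field(default_factory=list)  # community-based grades

    def consensus_rating(self) -> float:
        """Average of the working-group and community grades (illustrative aggregation)."""
        grades = [self.evidence_strength.value, *self.community_ratings]
        return sum(grades) / len(grades)


# Example usage with a factor relevant to the Open Field paradigm mentioned above.
entry = FactorEntry(
    category="in vivo",
    test_or_model="Open Field (rodent)",
    factor="ambient light intensity",
    evidence_strength=EvidenceStrength.HIGH,
    community_ratings=[3, 2],
)
print(f"{entry.factor}: consensus rating {entry.consensus_rating():.2f}")
```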