Many sectors have seen complete replacement of the in vivo rabbit eye test with reproducible and relevant in vitro and ex vivo methods to assess the eye corrosion/irritation potential of chemicals. However, the in vivo rabbit eye test remains the standard test used for agrochemical formulations in some countries. Therefore, two defined approaches (DAs) for assessing conventional agrochemical formulations were developed, using the EpiOcular™ Eye Irritation Test (EIT) [Organisation for Economic Co-operation and Development (OECD) test guideline (TG) 492] and the Bovine Corneal Opacity and Permeability (BCOP; OECD TG 437) test with histopathology. Presented here are the results from testing 29 agrochemical formulations, which were evaluated against the United States Environmental Protection Agency's (EPA) pesticide classification system and assessed using orthogonal validation, rather than direct concordance analysis with the historical in vivo rabbit eye data. Scientific confidence was established by evaluating the methods and testing results using an established framework that considers fitness for purpose, human biological relevance, technical characterisation, data integrity and transparency, and independent review. The in vitro and ex vivo methods used in the DAs were demonstrated to be as fit for purpose, reliable and relevant as the in vivo rabbit eye test, or more so. Overall, there is high scientific confidence in the use of these DAs for assessing the eye corrosion/irritation potential of agrochemical formulations.
One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays – the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay – six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy 88%), any of the alternative methods alone (accuracy 63–79%) or test batteries combining data from the individual methods (accuracy 75%). These results suggest that computational methods are promising tools for effectively identifying potential human skin sensitizers without animal testing.
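A minimal sketch of the modeling setup described above, combining binary assay calls and a physicochemical property into one feature matrix and fitting logistic regression and support vector machine classifiers with scikit-learn. The feature values and labels are random placeholders, not the ICCVAM dataset; only the 72/24 train/external-test split mirrors the study.

```python
# Illustrative sketch (not the ICCVAM data): assay calls + log P as features,
# logistic regression and SVM as classifiers, 72/24 train/external-test split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 96  # 72 training + 24 external test substances, as in the study

# Columns: DPRA call, h-CLAT call, read-across call, KeratinoSens call, log P
X = np.column_stack([
    rng.integers(0, 2, n),    # DPRA (0 = negative, 1 = positive)
    rng.integers(0, 2, n),    # h-CLAT
    rng.integers(0, 2, n),    # read-across prediction
    rng.integers(0, 2, n),    # KeratinoSens
    rng.normal(2.0, 1.5, n),  # log P (placeholder values)
])
y = rng.integers(0, 2, n)     # human sensitization hazard (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=72, random_state=0)

for model in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(type(model).__name__, f"external accuracy: {acc:.2f}")
```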
Access to computationally based visualization tools to navigate chemical space has become more important due to the increasing size and diversity of publicly accessible databases, associated compendiums of high-throughput screening (HTS) results, and other descriptor and effects data. However, application of these techniques requires advanced programming skills that are beyond the capabilities of many stakeholders. Here we report the development of the second version of the ChemMaps.com webserver (https://sandbox.ntp.niehs.nih.gov/chemmaps/) focused on environmental chemical space. The chemical space of ChemMaps.com v2.0, released in 2022, now includes approximately one million environmental chemicals from the EPA Distributed Structure-Searchable Toxicity (DSSTox) inventory. ChemMaps.com v2.0 incorporates mapping of HTS assay data from the U.S. federal Tox21 research collaboration program, which includes results from around 2,000 assays tested on up to 10,000 chemicals. As a case example, we showcased chemical space navigation for perfluorooctanoic acid (PFOA), part of the per- and polyfluoroalkyl substances (PFAS) chemical family, which are of significant concern for their potential effects on human health and the environment.
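One common way to build this kind of navigable chemical map is to compute molecular fingerprints and project them into a low-dimensional space; the sketch below uses RDKit Morgan fingerprints and PCA. This is a generic illustration, not the actual ChemMaps.com pipeline, and the structures (ethanol, benzene, acetic acid, PFOA) are arbitrary examples.

```python
# Generic chemical-space projection: Morgan fingerprints -> 2-D PCA map.
# Example structures only; not the ChemMaps.com implementation.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.decomposition import PCA

smiles = [
    "CCO",       # ethanol
    "c1ccccc1",  # benzene
    "CC(=O)O",   # acetic acid
    "OC(=O)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)F",  # PFOA
]
fps = []
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    fps.append(list(fp))  # bit vector -> list of 0/1 ints

coords = PCA(n_components=2).fit_transform(np.array(fps))
for smi, (x, y) in zip(smiles, coords):
    print(f"{smi}: ({x:.2f}, {y:.2f})")
```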
The field of toxicology has witnessed substantial advancements in recent years, particularly with the adoption of new approach methodologies (NAMs) to understand and predict chemical toxicity. Class-based methods such as clustering and classification are key to NAM development and application, aiding the understanding of hazard and risk concerns associated with groups of chemicals without additional laboratory work. Advances in computational chemistry, data generation and availability, and machine learning algorithms represent important opportunities for continued improvement of these techniques to optimize their utility for specific regulatory and research purposes. However, because these methods are intricate, a deep understanding of them and careful selection are imperative to align the appropriate method with each intended application.
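As an illustration of the class-based grouping these methods enable, the sketch below clusters a few chemicals by Tanimoto similarity of Morgan fingerprints using RDKit's Butina algorithm; the structures and the 0.4 distance cutoff are illustrative assumptions, not choices drawn from the text.

```python
# Illustrative class-based grouping: Butina clustering on Tanimoto distances
# between Morgan fingerprints. Structures and the 0.4 cutoff are arbitrary.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.ML.Cluster import Butina

smiles = ["CCO", "CCCO", "CCCCO", "c1ccccc1", "Oc1ccccc1", "CC(=O)O"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

# Flattened lower-triangle distance matrix expected by Butina.ClusterData
dists = []
for i in range(1, len(fps)):
    sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps[:i])
    dists.extend(1.0 - s for s in sims)

clusters = Butina.ClusterData(dists, len(fps), 0.4, isDistData=True)
for k, members in enumerate(clusters):
    print(f"cluster {k}:", [smiles[i] for i in members])
```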
To compare in vivo toxicity studies, dosimetry is needed to translate study-specific dose regimens into dose metrics such as tissue concentration. These tissue concentrations may then be compared with in vitro bioactivity assays to potentially identify mechanisms relevant to the lowest observed effect level (LOEL) dose group and the onset of the observed in vivo toxicity. Here, we examine the perfluorinated compounds (PFCs) perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS). We analyzed 9 in vivo toxicity studies for PFOA and 13 in vivo toxicity studies for PFOS. Both PFCs caused multiple effects in various test species, strains, and sexes. We used a Bayesian pharmacokinetic (PK) modeling framework to incorporate data from 6 PFOA PK studies and 2 PFOS PK studies (conducted in 3 species) to predict dose metrics for the in vivo LOELs and no observed effect levels (NOELs). We estimated PK parameters for 11 combinations of chemical, species, strain, and sex. Despite divergent study designs and species-specific PK, for a given effect, we found that the predicted dose metrics corresponding to the LOELs (and NOELs where available) occur at similar concentrations. In vitro assay results for PFOA and PFOS from EPA's ToxCast project were then examined. We found that, for a variety of nonimmunological effects, most in vitro bioactivity occurs at concentrations lower than the predicted concentrations for the in vivo LOELs and higher than the predicted concentrations for the in vivo NOELs (where available). These results indicate that, given sufficient PK data, the dose regimens corresponding to the in vivo LOELs, though not necessarily the specific effects, could have been predicted from in vitro studies for these 2 PFCs.
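The kind of dose-metric translation described above can be illustrated with a one-compartment steady-state calculation: average serum concentration Css = F × dose rate / clearance. The sketch below is a simplified stand-in for the study's Bayesian PK framework, and every parameter value is a placeholder rather than a posterior estimate.

```python
# Simplified one-compartment dose-metric translation (placeholder values,
# not the study's Bayesian posterior estimates).
import math

dose = 1.0     # hypothetical LOEL dose regimen, mg/kg/day (oral)
F = 1.0        # assumed oral bioavailability fraction
Vd = 0.2       # volume of distribution, L/kg (placeholder)
t_half = 20.0  # elimination half-life, days (placeholder)

CL = Vd * math.log(2) / t_half  # clearance, L/kg/day
Css = F * dose / CL             # average steady-state serum conc., mg/L

print(f"clearance: {CL:.4f} L/kg/day")
print(f"steady-state serum concentration: {Css:.1f} mg/L")
# Css (after unit conversion) is the kind of serum/tissue dose metric that
# can be compared against in vitro bioactivity concentrations.
```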
The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, a common concern is the quality of both the chemical structure information and the associated experimental data. This is especially true when those data are collected from multiple sources, as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can impact the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two- and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME environment and consists of three high-level steps. First, a structure encoding is read; next, the resulting in-memory representation is cross-referenced with any existing identifiers for consistency; finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization where possible, and removal of duplicates. This workflow was initially developed to support collaborative QSAR modeling projects to ensure consistency of the results from the different participants. It was then updated and generalized for other modeling applications, including modification of the "QSAR-ready" workflow to generate "MS-ready" structures that support substance mappings and searches for software applications related to non-targeted analysis by mass spectrometry. Both the QSAR-ready and MS-ready workflows are freely available in KNIME, via standalone versions on GitHub, and as Docker container resources for the scientific community. Scientific contribution: This work pioneers an automated workflow in KNIME that systematically standardizes chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data quality concerns through desalting, stereochemistry stripping, and normalization, it improves the accuracy and reliability of molecular descriptors. The freely available resources in KNIME, on GitHub, and as Docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
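A rough Python/RDKit analogue of the standardization steps listed above (desalting, stereochemistry stripping, normalization of groups such as nitro, neutralization, tautomer standardization, and duplicate removal). This is a sketch for illustration; the authoritative implementation is the KNIME workflow distributed on GitHub and as Docker containers.

```python
# Sketch of QSAR-ready standardization with RDKit (the real workflow is the
# KNIME implementation; the steps and their order here are an approximation).
from rdkit import Chem
from rdkit.Chem.MolStandardize import rdMolStandardize

def qsar_ready(smiles: str) -> str:
    mol = Chem.MolFromSmiles(smiles)
    mol = rdMolStandardize.FragmentParent(mol)        # desalt: keep parent fragment
    mol = rdMolStandardize.Normalize(mol)             # normalize groups (e.g., nitro)
    mol = rdMolStandardize.Uncharger().uncharge(mol)  # neutralize when possible
    Chem.RemoveStereochemistry(mol)                   # strip stereochemistry (2-D)
    mol = rdMolStandardize.TautomerEnumerator().Canonicalize(mol)
    return Chem.MolToSmiles(mol)                      # canonical SMILES for dedup

inputs = [
    "C(/C=C/c1ccccc1)(=O)[O-].[Na+]",  # sodium cinnamate: desalt + neutralize
    "CC(O)=CC",                        # enol: canonical tautomer -> ketone
    "O=[N+]([O-])c1ccccc1",            # nitrobenzene: nitro normalization
]
print({qsar_ready(s) for s in inputs})  # set removes duplicate structures
```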
A computational framework was developed to assist in screening and prioritizing chemicals based on their dosimetry, toxicity, and potential exposures. The overall strategy started with contextualizing chemical activity observed in high-throughput toxicity screening (HTS) by mapping these assays to biological events described in Adverse Outcome Pathways (AOPs). Next, in vitro to in vivo extrapolation (IVIVE) was used to convert an in vitro dose to an external exposure level, which was compared with potential exposure levels to derive an AOP-based margin of exposure (MOE). In this study, the framework was applied to estimate MOEs for chemicals that can potentially cause developmental toxicity, following a putative AOP for fetal vasculogenesis/angiogenesis. A physiologically based pharmacokinetic (PBPK) model was developed to describe chemical disposition during pregnancy and across the fetal, neonatal, infant, and adult life stages. Using this life-stage PBPK model, maternal exposures were estimated that would yield fetal blood levels equivalent to the chemical concentrations that altered in vitro activity of selected HTS assays related to the most sensitive putative vasculogenesis/angiogenesis AOP. The resulting maternal exposure estimates were then compared with potential exposure levels, using literature data or exposure models, to derive AOP-based MOEs.
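The final step of the framework reduces to a simple ratio, sketched below: the IVIVE-derived maternal exposure (the external dose predicted to yield a bioactive fetal blood level) divided by an estimated population exposure gives the AOP-based MOE. The numeric values are placeholders, not results from the study.

```python
# AOP-based margin of exposure as a ratio (placeholder numbers, not study
# results): IVIVE-derived maternal dose / estimated human exposure.
ivive_maternal_dose = 0.5   # mg/kg/day predicted to yield the bioactive
                            # fetal blood level (hypothetical)
estimated_exposure = 0.001  # mg/kg/day from exposure models (hypothetical)

aop_moe = ivive_maternal_dose / estimated_exposure
print(f"AOP-based margin of exposure: {aop_moe:.0f}")  # 500 in this example
```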
Since 2009, animal testing for cosmetic products has been prohibited in Europe, and in 2016 the US EPA announced its intent to modernize the so-called "6-pack" of acute toxicity tests (acute oral toxicity, acute dermal toxicity, acute inhalation toxicity, skin irritation and corrosion, eye irritation and corrosion, and skin sensitization) and expand the acceptance of alternative methods to reduce animal testing of pesticides. We have compiled, curated, and integrated the largest publicly available dataset and developed an ensemble of QSAR models for all six endpoints. All models were validated according to the OECD QSAR principles and tested using newly identified data on compounds not included in the training sets. We have established the publicly accessible Systemic and Topical chemical Toxicity (STopTox) web portal (https://stoptox.mml.unc.edu/), integrating all of the developed models for the "6-pack" assays. This portal can be used by scientists and regulators to identify putative toxicants or non-toxicants in chemical libraries of interest.
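A hedged sketch of a consensus (ensemble) QSAR classifier of the general kind such a portal integrates: several base learners trained on molecular fingerprints with probability-averaged voting in scikit-learn. The data are random placeholders, not the curated "6-pack" dataset, and the model choices are illustrative assumptions.

```python
# Generic consensus QSAR classifier: three base learners with soft voting.
# Random placeholder data; not the curated "6-pack" dataset or STopTox models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 1024))  # stand-in for binary fingerprints
y = rng.integers(0, 2, size=200)          # toxicant / non-toxicant labels

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities across models
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))  # consensus hazard calls for five chemicals
```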