Background: Foot surgery is common in patients with RA, but research into surgical outcomes is limited and conceptually flawed, as current outcome measures lack face validity: to date, no one has asked patients what is important to them. This study aimed to determine which factors are important to patients when evaluating the success of foot surgery in RA.
Methods: Semi-structured interviews of RA patients who had undergone foot surgery were conducted and transcribed verbatim. Thematic analysis of the interviews was conducted to explore issues that were important to patients.
Results: Eleven RA patients (9 male; mean age 59 years; mean disease duration 22 years; mean 3 years post-operation) with mixed experiences of foot surgery were interviewed. Patients interpreted outcome with respect to a multitude of factors; frequently, positive change in one aspect contrasted with negative opinions about another. Overall, four major themes emerged. Function: Functional ability and participation in valued activities were very important to patients. Walking ability was a key concern, but patients interpreted levels of activity in light of other aspects of their disease, reflecting on change in functional ability more than overall level. Positive feelings of improved mobility were often moderated by negative self-perception ("I mean, I still walk like a waddling duck"). Appearance: Appearance was important to almost all patients but was perhaps the most complex theme of all. Physical appearance, foot shape, and footwear were closely interlinked, yet patients saw these as distinct concepts. Patients' need to legitimize these feelings was clear, and they frequently entered into a defensive repertoire ("it's not cosmetic surgery; it's something that's more important than that, you know?"). Clinician opinion: Surgeons' post-operative evaluation of the procedure was very influential. The impact of this appraisal continued to affect patients' lasting impression irrespective of how the outcome compared to their initial goals ("when he'd done it … he said that hasn't worked as good as he'd wanted to … but the pain has gone"). Pain: While pain was important to almost all patients, it appeared to be less important than the other themes. Pain was predominantly raised when it influenced other themes, such as function; many patients still felt the need to legitimize their foot pain in order for health professionals to take it seriously ("in the end I went to my GP because it had happened a few times and I went to an orthopaedic surgeon who was quite dismissive of it, it was like what are you complaining about").
Conclusions: Patients interpret the outcome of foot surgery using a multitude of interrelated factors, particularly functional ability, appearance and surgeons' appraisal of the procedure. While pain was often noted, it appeared less important than other factors in the overall outcome of the surgery. Future research into foot surgery should incorporate the complexity of how patients determine their outcome.
Disclosure statement: All authors have declared no conflicts of interest.
Anti-transferrin receptor (TfR)-based bispecific antibodies have shown promise for boosting antibody uptake in the brain. Nevertheless, there are limited data on the molecular properties, including affinity, required for the successful development of TfR-based therapeutics. A complex nonmonotonic relationship exists between the affinity of the anti-TfR arm and brain uptake at therapeutically relevant doses. However, the quantitative nature of this relationship and its translatability to humans have so far been unexplored. Therefore, we developed a mechanistic pharmacokinetic-pharmacodynamic (PK-PD) model for bispecific anti-TfR/BACE1 antibodies that accounts for antibody-TfR interactions at the blood-brain barrier (BBB) as well as the pharmacodynamic (PD) effect of the anti-BACE1 arm. The calibrated model correctly predicted the optimal anti-TfR affinity required to maximize brain exposure of therapeutic antibodies in the cynomolgus monkey and was scaled to predict the optimal affinity of anti-TfR bispecifics in humans. Thus, this model provides a framework for testing critical translational predictions for anti-TfR bispecific antibodies, including the choice of candidate molecule for clinical development.
Several publications describing the use of anti-CD40L monoclonal antibodies (anti-CD40L) for the treatment of type 1 diabetes in non-obese diabetic (NOD) mice have reported different treatment responses to similar protocols. The Entelos Type 1 Diabetes PhysioLab platform, a dynamic large-scale mathematical model of the pathogenesis of type 1 diabetes, was used to study the effects of anti-CD40L therapy in silico. An examination of the impact of pharmacokinetic variability and the heterogeneity of disease progression rate on therapeutic outcome provided insights that could reconcile the apparently conflicting data. Optimal treatment protocols were identified by exploring the dynamics of key pathophysiological pathways.
Abstract: Type 1 diabetes is a complex, multifactorial disease characterized by T cell–mediated autoimmune destruction of insulin‐secreting pancreatic β cells. To facilitate research in type 1 diabetes, a large‐scale dynamic mathematical model of the female non‐obese diabetic (NOD) mouse was developed. In this model, termed the Entelos® Type 1 Diabetes PhysioLab® platform, virtual NOD mice are constructed by mathematically representing components of the immune system and islet β cell physiology important for the pathogenesis of type 1 diabetes. This report describes the scope of the platform and illustrates some of its capabilities. Specifically, using two virtual NOD mice with either average or early diabetes‐onset times, we demonstrate the reproducibility of experimentally observed dynamics involved in diabetes progression, therapeutic responses to exogenous IL‐10, and heterogeneity in disease onset. Additionally, we use the Type 1 Diabetes PhysioLab platform to investigate the impact of disease heterogeneity on the effectiveness of exogenous IL‐10 therapy to prevent diabetes onset. Results indicate that the inability of a previously published IL‐10 therapy protocol to protect NOD mice who exhibit early diabetes onset is due to high levels of pancreatic lymph node (PLN) inflammation, islet infiltration, and β cell destruction at the time of treatment initiation. Further, simulation indicates that earlier administration of the treatment protocol can prevent NOD mice from developing diabetes by initiating treatment during the period when the disease is still sensitive to IL‐10's protective function.
Systems pharmacology models are having an increasing impact on pharmaceutical research and development from preclinical through postapproval phases, including use in regulatory interactions. Given the wide diversity among the models and the contexts of use, a common but flexible strategy for model assessment is needed to enable the appropriate interpretation of model-based results. We present an approach to evaluate these models and discuss how it can be customized to available data and intended application. A wide range of modeling approaches, including empirical, mechanistic pharmacokinetic/pharmacodynamic, and quantitative systems pharmacology (QSP), can be applied toward pharmaceutical research and development. The evaluation of these models is critical to understanding their strengths/limitations and interpreting model results. Assessment typically involves evaluating fits to observed data and testing predictive capabilities where possible. Pharmacokinetic/pharmacodynamic models are routinely evaluated by goodness-of-fit plots, predictive checks, and external validations focused on capturing output data under the premise of parsimony.1 In contrast, QSP models focus on the representation of underlying biological systems and address questions that involve exploration of mechanism and extrapolation to novel scenarios. QSP models are thus frequently and by necessity complex and underconstrained, leading to confusion around how QSP models can be appropriately evaluated.2 Previous QSP tutorials have presented considerations in planning, developing, qualifying, and applying systems models.3, 4 Here, we focus on model assessment, defining four major assessment areas (biology, implementation, simulation, and robustness), and suggest activities that can be customized based on the context of the work, mapping these efforts to previously presented QSP workflow stages and qualification criteria (Table 1). 
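One routine form of predictive check mentioned above can be sketched numerically: simulate the model under sampled parameter uncertainty and compare observed data against the resulting percentile bands. The following is a minimal illustration in Python; the one-compartment PK model, dose, and log-normal uncertainty distributions are hypothetical assumptions chosen for the sketch, not taken from any study discussed here.

```python
import numpy as np

rng = np.random.default_rng(1)

def pk_curve(t, cl, v, dose=100.0):
    """One-compartment IV bolus model: C(t) = (dose/v) * exp(-(cl/v) * t)."""
    return (dose / v) * np.exp(-(cl / v) * t)

t = np.linspace(0.0, 24.0, 25)  # sampling times, h

# Sample parameter sets from assumed log-normal uncertainty distributions
# (hypothetical central values and spreads, for illustration only).
cl = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=500)   # clearance, L/h
v = rng.lognormal(mean=np.log(50.0), sigma=0.2, size=500)   # volume, L

# Simulate every sampled parameter set over the time grid.
sims = np.array([pk_curve(t, cl_i, v_i) for cl_i, v_i in zip(cl, v)])

# Percentile bands of the simulated output: observed concentrations falling
# mostly inside the 5th-95th percentile band would support the model.
lo, med, hi = np.percentile(sims, [5, 50, 95], axis=0)
```

In practice the observed concentration-time data would be overlaid on these bands, which is the essence of a visual predictive check.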
We illustrate the tailored application of the assessment approach with two published models of cancer signaling. Assessment of the biological relevance is of critical importance in QSP, where utility requires that the biology included is appropriate to address the problem at hand and reflects relevant knowledge, data, and literature. Thus, literature support and input from biological and clinical experts are valuable in assessment. Mechanisms, hypotheses, behaviors, and phenotypes of interest should be articulated to ensure the adequacy of biological scope. QSP models typically include the representation of targets, drugs, biomarkers, and outcomes of interest. Although the scale, breadth, and depth of biological scope differ greatly among applications, a model should minimally include sufficient biological pathways to connect each target or drug to the relevant biomarkers and outcomes, potentially via intermediaries. Assessment of the model implementation involves evaluation of the mathematical formulation and quality checks on the accuracy and veracity of the model, its mathematical structure, the parameters, and their influence on model simulations. The choice of formalism must be consistent with the project goals. Ensuring that the implementation is technically accurate (e.g., correct coding, unit consistency) and appropriate for the mechanisms represented is also essential and may be required in regulatory submission. Once structure and implementation are confirmed, dynamical systems analyses can be used to explore inherent model dynamics and corresponding parameter ranges to assess their relevance. Although structural identifiability of model parameters can be difficult to assess or ensure, the impact of the parameters on the ability to reproduce critical behaviors can be determined via sensitivity analyses that determine how uncertainty in and variability around a given parameter set (local) or throughout parameter space (global) influence model outputs. 
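As a concrete sketch of the local case, a normalized sensitivity can be computed by central finite differences around a nominal parameter value; a global analysis would instead sample widely across parameter space (e.g., Latin hypercube sampling). The one-compartment model and all numbers below are hypothetical, chosen only to make the calculation self-contained.

```python
import numpy as np
from scipy.integrate import solve_ivp

def auc(kel, c0=10.0, t_end=50.0):
    """AUC of a toy one-compartment model dC/dt = -kel * C."""
    sol = solve_ivp(lambda t, y: [-kel * y[0]], (0.0, t_end), [c0],
                    dense_output=True, rtol=1e-8, atol=1e-10)
    ts = np.linspace(0.0, t_end, 2001)
    c = sol.sol(ts)[0]
    # trapezoidal integration of the concentration-time curve
    return np.sum(0.5 * (c[:-1] + c[1:]) * np.diff(ts))

def local_sensitivity(f, p, rel_step=1e-3):
    """Normalized local sensitivity (dY/Y) / (dp/p) via central differences."""
    dp = p * rel_step
    return (f(p + dp) - f(p - dp)) / (2.0 * dp) * p / f(p)

# For this model AUC ~ c0/kel, so the normalized sensitivity is close to -1:
# a 1% increase in the elimination rate gives roughly a 1% drop in exposure.
s = local_sensitivity(auc, 0.5)
```

The same wrapper can be applied parameter by parameter to rank influences on any scalar model output.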
Other approaches, such as Monte Carlo simulation, that explore model behavior under different parameterizations can also inform this question and ensure consistency with expectations. These methods are used to confirm the ability of the model to generate distinct qualitative features or phenotypes (e.g., ranges of treatment response, different dynamical signaling features)5 or to highlight needed revision of the model biology or mathematics. Assessment of simulation results gauges the qualitative and quantitative plausibility of model simulations with respect to data and biological understanding. Generally, during modeling, parameters are estimated such that model or subsystem outputs match a set of qualitative and/or quantitative criteria. Confirmation that the model satisfactorily recapitulates this training/calibration data is one critical step. However, confidence in model predictions further requires testing the model's ability to prospectively or retrospectively predict data or behaviors not used in model calibration. Ideally, these validation/test experiments should be orthogonal to the calibration data yet fall within the scope of the biology represented. When data are limited, alternative approaches such as leave-one-out cross-validation or iterative calibration, validation, and updating can be considered. Sensitivity analysis that demonstrates appropriate responses to parameter modification can also provide confidence in simulated or predicted behaviors. Where validation against data or other biological knowledge is not demonstrated, prospective simulations should be considered explorations, hypotheses, or "potential outcomes" rather than predictions. In such scenarios, modeling can still provide value by increasing mechanistic understanding, highlighting potential outcomes, and identifying or reducing uncertainties and risks in pharmaceutical research and development. 
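Leave-one-out cross-validation, mentioned above for data-limited settings, repeatedly refits the model with one observation held out and scores the prediction on that observation. A minimal, model-agnostic sketch follows; the polynomial fit stands in for whatever calibration routine the actual model uses, purely for illustration.

```python
import numpy as np

def loo_errors(x, y, fit, predict):
    """Leave-one-out prediction errors: refit without point i, predict point i."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i          # drop observation i
        params = fit(x[mask], y[mask])          # recalibrate on the rest
        errors.append(predict(params, x[i]) - y[i])
    return np.array(errors)

# Toy data lying on a line, with a linear "model" as a stand-in calibration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
errors = loo_errors(x, y,
                    fit=lambda xs, ys: np.polyfit(xs, ys, 1),
                    predict=np.polyval)
```

Large held-out errors relative to calibration errors flag overfitting or a model that does not generalize beyond the points it was tuned to.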
Assessment of the robustness of results ties together many aspects of biology, implementation, and simulation to increase confidence in model-based insights and predictions, specifically their robustness to biological variability and uncertainty (alternate hypotheses or quantitative differences). This assessment focuses on the extent to which the impact of variability or uncertainty in topology or parameters has been considered in predictions and the extent to which variability in data is captured. This can be done through explicit simulation of alternate parameterizations ("virtual subjects")4 or collections thereof that cover input and/or output uncertainty and variability.4, 5 Note that exploration of parameter uncertainty helps address concerns related to parameter identifiability. The application of the model and the availability of data determine how and to what extent different assessment approaches are appropriate. Some applications (e.g., clinical trial design) require more robust assessment, whereas a more flexible approach may be sufficient for mechanistic exploration. Decisions with significant safety or financial implications also require more rigorous assessment, as do efforts where modeling is a primary driver for a decision, without parallel evidence. Abundant data enable separate calibration and validation data sets, whereas limited data may necessitate other approaches to testing the model and corresponding caution in interpretation of results. In addition, different mathematical formulations require different mathematical assessment techniques. Although context influences how and to what extent each major assessment area is addressed, all areas should be considered and discussed. 
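One common way to construct such virtual subjects is acceptance sampling: draw candidate parameter sets from plausible ranges, simulate each, and retain only candidates whose outputs fall within observed bounds. The Emax dose-response model, parameter ranges, and acceptance bounds below are all hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(dose, emax, ec50):
    """Hypothetical Emax dose-response model."""
    return emax * dose / (ec50 + dose)

# Draw candidate virtual subjects from assumed plausible parameter ranges.
n = 5000
emax = rng.uniform(0.5, 1.5, n)
ec50 = rng.uniform(0.1, 10.0, n)

# Retain only candidates whose simulated response at a reference dose falls
# within an assumed observed range (accept/reject against output data).
obs_lo, obs_hi = 0.4, 0.8
pred = response(1.0, emax, ec50)
keep = (pred >= obs_lo) & (pred <= obs_hi)
vpop = np.column_stack([emax[keep], ec50[keep]])  # accepted virtual subjects
```

The accepted collection spans the input uncertainty consistent with the output data, and predictions made across it expose how much that residual uncertainty matters.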
Figure 1 shows how contextual considerations can influence the degree of rigor required in model assessment and indicates the different context surrounding example models of cancer signaling pathways.6-9 Here, we discuss how context influences model assessment for two of these studies.6, 7 Many cancers display alterations in mitogen-activated protein kinase (MAPK), PI3K, and other intracellular signaling pathways that promote tumor growth. Briefly, the canonical MAPK pathway proceeds from receptor engagement through RAS, RAF, MEK, and ERK phosphorylation to downstream effects on cell growth, survival, and protein translation. Kirouac et al.6 modeled the MAPK pathway based on rich preclinical and limited clinical data to explore the potential utility of a novel ERK inhibitor, especially in the treatment of BRAF-mutant (BRAFV600E) colorectal cancer, to support clinical strategy and ongoing phase I trials. Eduati et al.7 modeled multiple signaling pathways, including MAPK, using in vitro data from a broad set of colorectal cancer lines to investigate diversity in cellular signaling and mechanisms of resistance and to suggest sensitivities for possible therapeutic investigation. In each case, the biology represented is appropriate for the goal. Kirouac et al.6 focus on the MAPK pathway, including targets and mutations of interest, hypothesized resistance mechanisms (receptor redundancy, bypass signaling, feedbacks), and a mechanistic link from signaling to cell/tumor growth. In contrast, Eduati et al.7 address a broader set of signaling pathways and interactions required to identify resistance mechanisms in different cell lines. 
Kirouac et al.6 use an ordinary differential equation formulation appropriate for signaling and growth dynamics, whereas the logic–ordinary differential equation implementation of signaling used by Eduati et al.7 facilitates exploration of differential signaling among cell lines, while an elastic net model relates signaling model parameters to in vitro cell survival. Full model specifications are provided in each to enable thorough mathematical assessment. Although neither study presents formal structural or dynamical analysis, the Kirouac et al.6 model is shown to capture critical dynamic data, such as feedback-driven ERK rebound. With respect to parameter sensitivity, both studies verify reasonable model sensitivities by demonstrating appropriate responses of diverse cells/tumors to different perturbations (drugs and stimuli), and Eduati et al.7 further analyze which parameters are most correlated with survival. Both models are calibrated to rich data sets obtained using diverse preclinical models and treatments. Both studies include preclinical validation: Kirouac et al.6 by (retrospective) prediction of growth response for multiple drug combinations and preclinical models, and Eduati et al.7 by (prospective) in vitro verification of a novel model-predicted drug combination. To support clinical application, Kirouac et al.6 capture prior clinical trial data in a virtual population and validate quantitative predictions for ERK inhibitor efficacy against emerging clinical results. In contrast, clinical simulation is not the focus of the Eduati et al.7 effort, and thus clinical validation is neither required nor included; instead, they cite ongoing trials as evidence for the relevance of their predictions. Variability is explored in each study using different parameterizations for different cell lines/tumors. Eduati et al.7 emphasize variability in preclinical signaling pathway usage and graphically illustrate the inferred differences. 
Kirouac et al.6 focus on representing the diversity required to predict clinical response distributions and present both the ranges of parameters sampled and the resulting virtual population output variability, although they do not report the final parameter ranges retained in the virtual population. Ultimately, the approaches taken in each study enabled the exploration of differential responsiveness and resistance in the corresponding contexts. Numerous other modeling studies (from Huang and Ferrell8 to Kochańczyk et al.9) have investigated MAPK pathway topology, signal propagation, feedback and crosstalk, and dynamical features. Such studies often perform detailed dynamical and parameter sensitivity analyses but did not always include or require extensive model calibration and validation, given the exploratory intent of the work, further illustrating how goals and context shape assessment strategy. We have proposed a customizable approach for QSP model assessment consistent with previous guidances and tutorials. A technical review of QSP efforts could include an assessment summary describing the context of use and listing the approaches taken in each of the four major areas, noting justification and limitations. This could accompany detailed reporting of assessment and modeling results as outlined in Table 1 and in a recent publication from the UK QSP Network.10 This uniform assessment approach, which allows for customization to context of use, could thus support communication and review, including regulatory interactions. Publication and transparent model sharing would further promote assessment and use by the community, as evidenced by the recently described reuse and utilization of a published QSP model by the US Food and Drug Administration.10 Moving forward, common language and libraries of visualizations, analysis scripts or tools, and metrics would facilitate the execution, communication, and review of QSP efforts, as has been the case in pharmacometrics. 
No funding was received for this work. S.R. is an employee of Genentech Inc. J.R.C. is an employee of Eli Lilly and Co. C.M.F. is an employee of Rosa & Co LLC. C.J.T. is an employee of Bristol-Myers Squibb Co.