Graph neural networks (GNNs) have achieved state-of-the-art performance in various high-stakes prediction tasks, but multiple layers of aggregation on graphs with irregular structures make GNNs less interpretable models. Prior methods use simpler subgraphs to simulate the full model, or counterfactuals to identify the causes of a prediction. The two families of approaches aim at two distinct objectives, "simulatability" and "counterfactual relevance", but it is not clear how the objectives jointly influence human understanding of an explanation. We design a user study to investigate these joint effects and use the findings to design a multi-objective optimization (MOO) algorithm that finds Pareto optimal explanations well balanced between simulatability and counterfactual relevance. Since the target model can be any GNN variant and may not be accessible due to privacy concerns, we design a search algorithm that uses only zeroth-order information, without access to the architecture or parameters of the target model. Quantitative experiments on nine graphs from four applications demonstrate that the Pareto efficient explanations dominate single-objective baselines that use first-order continuous optimization or discrete combinatorial search. The explanations are further evaluated for robustness and sensitivity to show their capability of revealing convincing causes while remaining cautious about possible confounders. The diverse dominating counterfactuals can certify the feasibility of algorithmic recourse, which can potentially promote algorithmic fairness where humans participate in GNN-assisted decision-making.
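The abstract's notion of Pareto optimal explanations can be made concrete with a minimal sketch of non-dominated filtering over two objective scores. This is an illustrative implementation of the standard dominance check, not the paper's actual MOO algorithm; the function names and the (simulatability, counterfactual-relevance) tuples are hypothetical.

```python
def dominates(a, b):
    """a dominates b if a is at least as good on both objectives
    and strictly better on at least one (higher is better here)."""
    return a[0] >= b[0] and a[1] >= b[1] and (a[0] > b[0] or a[1] > b[1])

def pareto_front(candidates):
    """Keep every candidate that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical (simulatability, counterfactual-relevance) scores
scores = [(0.9, 0.2), (0.7, 0.7), (0.5, 0.9), (0.6, 0.6)]
front = pareto_front(scores)
# (0.6, 0.6) is dominated by (0.7, 0.7); the other three are incomparable.
```

A single-objective method would return only one end of this front; the MOO view keeps every non-dominated trade-off for the user to choose from.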
Probabilistic graphical models, such as Markov random fields (MRFs), exploit dependencies among random variables to model a rich family of joint probability distributions. Inference algorithms, such as belief propagation (BP), can effectively compute the marginal posteriors for decision making. Nonetheless, inferences involve sophisticated probability calculations and are difficult for humans to interpret. Among existing explanation methods for MRFs, none is designed for fair attribution of an inference outcome to the elements of the MRF where the inference takes place. Shapley values provide rigorous attributions but so far have not been studied on MRFs. We thus define Shapley values for MRFs to capture both the probabilistic and the topological contributions of the variables on MRFs. We theoretically characterize the new definition regarding independence, equal contribution, additivity, and submodularity. As brute-force computation of the Shapley values is challenging, we propose GraphShapley, an approximation algorithm that exploits the decomposability of Shapley values, the structure of MRFs, and the iterative nature of BP inference to speed up the computation. In practice, we propose meta-explanations to explain the Shapley values and make them more accessible and trustworthy to human users. On four synthetic and nine real-world MRFs, we demonstrate that GraphShapley generates sensible and practical explanations.
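To illustrate why brute-force Shapley computation is challenging and how sampling-based approximation works, here is a generic Monte Carlo Shapley estimator over random permutations. This is a textbook sketch, not GraphShapley itself; the toy additive value function and all names are our own assumptions.

```python
import random

def shapley_mc(players, value_fn, n_samples=2000, seed=0):
    """Monte Carlo Shapley estimate: average each player's marginal
    contribution over randomly ordered coalitions."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition, prev = set(), value_fn(frozenset())
        for p in order:
            coalition.add(p)
            cur = value_fn(frozenset(coalition))
            phi[p] += cur - prev  # marginal contribution of p
            prev = cur
    return {p: v / n_samples for p, v in phi.items()}

# Toy additive game: v(S) is the sum of fixed per-variable contributions.
contrib = {"x1": 0.5, "x2": 0.3, "x3": 0.2}
v = lambda S: sum(contrib[p] for p in S)
phi = shapley_mc(list(contrib), v)
# For an additive game, Shapley values equal the individual contributions.
```

Exact computation requires all 2^n coalitions, which motivates approximations such as GraphShapley that additionally exploit MRF structure and the iterative BP computation.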
The success of deep neural networks and their potential use in many safety-critical applications have motivated research on the formal verification of deep neural networks. A fundamental primitive enabling the formal analysis of neural networks is output range analysis. Existing approaches to output range analysis either focus on simple activation functions, such as ReLU, or compute a relaxed result for other activation functions, such as the exponential linear unit (ELU). In this article, we propose an approach to compute the output range of feed-forward deep neural networks via linear programming. The key idea is to encode the activation functions, such as ELU and sigmoid, as linear constraints in terms of the line between the left and right end-points of the input range and the tangent lines at some special points in the input range. A strategy to partition the network to obtain a tighter range is also presented. The experimental results show that our approach obtains tighter results than RobustVerifier on ELU networks and sigmoid networks. Moreover, our approach performs better than (the linear encodings implemented in) Crown on ELU networks with $\alpha = 0.5, 1.0$ and on sigmoid networks, and better than CNN-Cert and DeepCert on ELU networks with $\alpha = 0.5$ or $1.0$. For ELU networks with $\alpha = 2.0$, our approach achieves results close to those of Crown, CNN-Cert, and DeepCert. Finally, we also found that the network partition helps achieve tighter results and improves efficiency for ELU networks.
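The endpoint-line-plus-tangent idea can be sketched for sigmoid on an interval where the function is convex (x ≤ 0, since sigmoid is convex there): the chord through the endpoints lies above the curve and any tangent lies below, yielding linear upper and lower constraints. This is our own illustrative construction of the general technique, not the paper's exact encoding or its choice of "special points".

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_linear_bounds(l, u):
    """Linear enclosure of sigmoid on [l, u] with u <= 0 (convex region):
    the secant through the endpoints is an upper bound, the tangent at
    the midpoint is a lower bound."""
    assert l < u <= 0  # this sketch only handles the convex region
    # Upper bound: secant y = a*x + b through (l, s(l)) and (u, s(u)).
    a = (sigmoid(u) - sigmoid(l)) / (u - l)
    b = sigmoid(l) - a * l
    # Lower bound: tangent at midpoint m; sigmoid'(m) = s(m) * (1 - s(m)).
    m = (l + u) / 2.0
    sm = sigmoid(m)
    ta = sm * (1.0 - sm)
    tb = sm - ta * m
    return (a, b), (ta, tb)

(a, b), (ta, tb) = sigmoid_linear_bounds(-4.0, -1.0)
for x in (-4.0, -3.0, -2.0, -1.0):
    y = sigmoid(x)
    assert ta * x + tb <= y + 1e-9 and y <= a * x + b + 1e-9
```

Feeding such constraints for every neuron into a linear program then bounds the network output; tighter enclosures (e.g. via input-range partitioning) shrink the gap between the LP bound and the true range.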
Coffee beans contain numerous bioactive components that exhibit antioxidant capacity when assessed using both chemical (cell-free) and biological (cell-based) model systems. However, the mechanisms underlying the antioxidant effects of coffee in biological systems are not fully understood and in some cases differ considerably from results obtained with simpler in vitro chemical assays. In the present study, the physicochemical characteristics and antioxidant activity of roasted and non-roasted coffee extracts were investigated in both cell-free (ORACFL) and cell-based systems. A profile of antioxidant gene expression in cultured human colon adenocarcinoma Caco-2 cells treated with roasted and non-roasted coffee extracts, respectively, was obtained using real-time polymerase chain reaction (PCR) array technology. Results demonstrated that the mechanisms of antioxidant activity associated with coffee constituents assessed by the ORACFL assay differed from those observed using an intracellular oxidation assay with Caco-2 cells. Moreover, roasted coffee (both light and dark roasted) extracts produced both increased and decreased expression of numerous genes involved in the management of oxidative stress via the antioxidant defence system. The selective and specific positive induction of antioxidant response element (ARE)-dependent genes, including gastrointestinal glutathione peroxidase (GPX2), sulfiredoxin (SRXN1), thioredoxin reductase 1 (TXNRD1), peroxiredoxin 1 (PRDX1), peroxiredoxin 4 (PRDX4) and peroxiredoxin 6 (PRDX6), was identified with the activation of the endogenous antioxidant defence system in Caco-2 cells.
The thermal stability of L‐5‐methyltetrahydrofolic acid (L‐5‐MTHF) was investigated in model/buffer systems and food systems. L‐5‐MTHF degradation followed first‐order reaction kinetics, with relatively greater (P < 0.01) stability at pH 4 compared to pH 6.8 in the buffer systems. This was confirmed using cyclic voltammetry. The stability (for example, k‐values) of L‐5‐MTHF in an oxygen-controlled environment improved (P < 0.001) proportionally in the presence of increasing molar ratios of sodium ascorbate (NaAsc). The addition of NaAsc to L‐5‐MTHF after heat treatment was also effective at returning thermally oxidized L‐5‐MTHF to its original form. A scheme was developed to explain the degradation and regeneration of L‐5‐MTHF. The importance of antioxidant protection of L‐5‐MTHF from thermal oxidation was extended using two distinct food systems, namely skim milk and soy milk, both with known antioxidant capacities. We conclude that the antioxidant activity of food components can enhance the stability of L‐5‐MTHF.
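The first-order kinetics underlying the reported k-values can be made explicit: C(t) = C0·exp(−k·t), so ln(C/C0) is linear in time with slope −k. The sketch below fits k from synthetic concentration data; the numbers are purely illustrative and are not the study's measurements.

```python
import math

def fit_first_order_k(times, concs):
    """Estimate the first-order rate constant k from C(t) = C0*exp(-k*t)
    by a least-squares fit of ln(C/C0) vs t through the origin."""
    c0 = concs[0]
    ys = [math.log(c / c0) for c in concs]
    # Slope through the origin is sum(t*y)/sum(t*t); k is its negative.
    return -sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

# Synthetic data generated with a hypothetical k = 0.05 min^-1
k_true = 0.05
times = [0, 10, 20, 40, 60]
concs = [100.0 * math.exp(-k_true * t) for t in times]
k_est = fit_first_order_k(times, concs)
```

Comparing fitted k-values across conditions (pH, NaAsc ratio) is how the relative stabilities described in the abstract would be quantified.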