Biophysical and biochemical cues of biomaterials can regulate cell behaviors. Dental pulp stem cells (DPSCs) in pulp tissues can differentiate into odontoblast-like cells and secrete reparative dentin, which forms a barrier that protects the underlying pulp tissue and enables complete pulp healing. Promoting the odontogenic differentiation of DPSCs is therefore essential for dentin regeneration, yet the effects of the surface potentials of biomaterials on the adhesion and odontogenic differentiation of DPSCs remain unclear. Here, poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) films with different surface potentials were prepared by spin coating followed by contact poling. The cytoskeletal organization of DPSCs grown on P(VDF-TrFE) films was examined by immunofluorescence staining, and the lateral detachment forces of DPSCs from the films were quantified by atomic force microscopy (AFM). The effects of the electrical stimulation generated by the P(VDF-TrFE) films on the odontogenic differentiation of DPSCs were evaluated in vitro and in vivo. The unpolarized, positively polarized, and negatively polarized films had surface potentials of −52.9, +902.4, and −502.2 mV, respectively. DPSCs on both negatively and positively polarized P(VDF-TrFE) films had larger cell areas and length-to-width ratios than those on the unpolarized films (P < 0.05). During detachment of DPSCs from the films, the average maximum detachment forces were 29.4, 72.1, and 53.9 nN for the unpolarized, positively polarized, and negatively polarized groups, respectively (P < 0.05). Compared with the unpolarized films, the polarized films enhanced the mineralization activity of DPSCs and increased their expression of odontogenesis-related proteins (P < 0.05). The extracellular signal-regulated kinase (ERK) signaling pathway was involved in the surface-charge-induced odontogenic differentiation of DPSCs. In vivo, the polarized P(VDF-TrFE) films enhanced the adhesion of DPSCs and promoted their odontogenic differentiation through electrical stimulation, demonstrating the potential of electroactive biomaterials for reparative dentin formation in direct pulp capping.
This paper proposes an Any-time super-Resolution Method (ARM) to tackle over-parameterization in single image super-resolution (SISR) models. Our ARM is motivated by three observations: (1) the performance on different image patches varies with SISR networks of different sizes; (2) there is a tradeoff between computational overhead and the quality of the reconstructed image; (3) given an input image, its edge information is an effective proxy for estimating its PSNR. We therefore train an ARM supernet containing SISR subnets of different sizes to handle image patches of varying complexity. To this end, we construct an Edge-to-PSNR lookup table that maps the edge score of an image patch to the expected PSNR of each subnet, together with the computation cost of each subnet. During inference, each image patch is dispatched to the subnet offering the best computation-performance tradeoff. Moreover, all SISR subnets share the weights of the ARM supernet, so no extra parameters are introduced. The availability of multiple subnets also lets the computational cost of the SISR model adapt to dynamically available hardware resources, allowing the SISR task to be in service at any time. Extensive experiments on datasets of different resolutions with popular SISR networks as backbones verify the effectiveness and versatility of our ARM. The source code is available at https://github.com/chenbong/ARM-Net.
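As a rough illustration of this patch-dispatch idea, the sketch below scores a patch by its mean gradient magnitude, looks up each subnet's predicted PSNR in an Edge-to-PSNR table, and picks the subnet with the best predicted PSNR-vs-cost tradeoff. It is a minimal sketch under stated assumptions: the names (`edge_score`, `dispatch_patch`, `luts`, `costs`) and the linear PSNR-minus-cost selection rule are illustrative, not the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_score(patch: np.ndarray) -> float:
    # Mean gradient magnitude as a cheap proxy for patch complexity.
    p = patch.astype(float)
    return float(np.hypot(sobel(p, axis=0), sobel(p, axis=1)).mean())

def dispatch_patch(patch, subnets, luts, costs, eta=0.01):
    """subnets: SISR subnet callables, small to large, sharing supernet weights.
    luts[i][b]: predicted PSNR of subnet i for edge-score bin b (Edge-to-PSNR LUT).
    costs[i]:  computation cost of subnet i (e.g., normalized FLOPs).
    Picks the subnet with the best predicted PSNR-vs-cost tradeoff."""
    n_bins = len(luts[0])
    # Quantize the edge score into a LUT bin (assumes 8-bit image patches).
    b = min(int(edge_score(patch) / 255.0 * n_bins), n_bins - 1)
    tradeoff = [luts[i][b] - eta * costs[i] for i in range(len(subnets))]
    return subnets[int(np.argmax(tradeoff))](patch)
```

In practice the lookup tables would be calibrated offline on held-out patches, and a weight such as `eta` tuned to the deployment's compute budget.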
Both the metabolic switch from oxidative phosphorylation to glycolysis (OGS) and the epithelial-mesenchymal transition (EMT) promote cellular reprogramming at early stages. However, the connection between them has not been elucidated. Here, when a chemically defined medium was used to induce early EMT during mouse reprogramming, a facilitated OGS was observed at the same time. Further investigation suggested that the two events form a positive feedback loop via transcriptional activation, cooperate to upregulate epigenetic factors such as Bmi1, Ctcf, Ezh2, Kdm2b, and Wdr5, and accelerate pluripotency induction at the early stage. At late stages, however, by over-inducing glycolysis and preventing the necessary mesenchymal-epithelial transition, the two events trapped the cells in a new pluripotency state between the naïve and primed states and inhibited further reprogramming toward the naïve state. In addition, pluripotent stem cells in this new state are highly similar to epiblasts from E4.5 and E5.5 embryos and are distinct from the previously reported epiblast-like and formative states. The time-dependent cooperation between OGS and EMT in regulating pluripotency should therefore extend our understanding of the related fields.
Discriminant Correlation Filter (DCF)-based methods have become a dominant approach to online object tracking. The features used in these methods, however, are either hand-crafted features such as HOG, or convolutional features trained independently on other tasks such as image classification. In this work, we present an end-to-end lightweight network architecture, namely DCFNet, to learn the convolutional features and perform the correlation tracking process simultaneously. Specifically, we treat the DCF as a special correlation filter layer added to a Siamese network, and carefully derive the backpropagation through it by defining the network output as the probability heatmap of the object location. Since the derivation is still carried out in the Fourier frequency domain, the efficiency of the DCF is preserved. This enables our tracker to run at more than 60 FPS at test time, while achieving a significant accuracy gain over KCF with HOG features. Extensive evaluations on the OTB-2013, OTB-2015, and VOT2015 benchmarks demonstrate that the proposed DCFNet tracker is competitive with several state-of-the-art trackers, while being more compact and much faster.
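For intuition about what such a correlation filter layer computes, the numpy sketch below implements the standard linear-kernel, multi-channel DCF closed-form ridge regression in the Fourier domain. It is a hand-rolled illustration of the underlying formulation, not DCFNet's differentiable layer or its backpropagation; the function names and Gaussian-label parameters are assumptions.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    # Desired response: a 2-D Gaussian whose peak sits at the (0, 0) shift.
    yy = np.arange(h).reshape(-1, 1) - h // 2
    xx = np.arange(w).reshape(1, -1) - w // 2
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def dcf_response(x, z, lam=1e-4):
    """x, z: multi-channel features (C, H, W) of the template and search patch.
    Solves the linear-kernel DCF ridge regression in the Fourier domain and
    returns the spatial response map; its peak gives the target translation."""
    _, H, W = x.shape
    X = np.fft.fft2(x, axes=(-2, -1))
    Z = np.fft.fft2(z, axes=(-2, -1))
    Y = np.fft.fft2(gaussian_label(H, W))
    kxx = (X * np.conj(X)).sum(axis=0).real   # channel-summed autocorrelation
    alpha = Y / (kxx + lam)                   # closed-form dual solution
    kxz = (Z * np.conj(X)).sum(axis=0)        # cross-correlation with search patch
    return np.fft.ifft2(alpha * kxz).real
```

Because every step is element-wise in the frequency domain, both this solver and its gradients stay cheap, which is what preserves the DCF's efficiency inside a trainable network.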
Establishing long-context capability in large vision-language models is crucial for video understanding, high-resolution image understanding, multi-modal agents, and reasoning. We introduce Long-VITA, a simple yet effective large multi-modal model for long-context vision-language understanding tasks. It can concurrently process and analyze image, video, and text modalities over 4K frames or 1M tokens while delivering strong performance on short-context multi-modal tasks. We propose an effective multi-modal training schema that starts from large language models and proceeds through vision-language alignment, general knowledge learning, and two sequential stages of long-sequence fine-tuning. We further implement context-parallel distributed inference and a logits-masked language-modeling head to scale Long-VITA to infinitely long inputs of images and texts during inference. Regarding training data, Long-VITA is built on a mix of $17$M samples from public datasets only, yet demonstrates state-of-the-art performance on various multi-modal benchmarks compared with recent cutting-edge models trained on internal data. Long-VITA is fully reproducible and supports both NPU and GPU platforms for training and testing. We hope Long-VITA can serve as a competitive baseline and offer valuable insights for the open-source community in advancing long-context multi-modal understanding.
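One way to read the logits-masked head: on a million-token input, projecting every position's hidden state to the vocabulary yields a logits tensor whose memory grows linearly with sequence length, while autoregressive decoding only ever consumes the final position. The PyTorch sketch below illustrates that idea under this interpretation; the class and argument names are hypothetical, and Long-VITA's actual head (which also interacts with its context parallelism) may differ.

```python
import torch
import torch.nn as nn

class LogitsMaskedLMHead(nn.Module):
    """Illustrative sketch, not Long-VITA's actual implementation.
    Selecting only the positions whose logits are needed (e.g., the last
    token during decoding) before the vocabulary projection keeps the
    head's memory footprint independent of sequence length."""

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, hidden: torch.Tensor, keep_mask: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden); keep_mask: (batch, seq_len) bool.
        return self.proj(hidden[keep_mask])  # (n_kept, vocab), n_kept << seq_len

# Example: keep only the last position of each sequence during decoding.
head = LogitsMaskedLMHead(hidden_size=64, vocab_size=1000)
hidden = torch.randn(2, 4096, 64)
mask = torch.zeros(2, 4096, dtype=torch.bool)
mask[:, -1] = True
logits = head(hidden, mask)  # shape (2, 1000) instead of (2, 4096, 1000)
```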