Abstract Three non-covalent metallotetraphenylporphyrin/fullerene (MTPPS₄ (M = Zn²⁺, Fe²⁺, Co²⁺)/C₆₀) nanocomposites were prepared via π-π molecular interactions and characterized by scanning electron microscopy and UV-Vis absorption spectroscopy. Electrocatalytic studies indicated that the MTPPS₄/C₆₀ nanocomposites embedded in a TOAB film on a glassy carbon electrode (GCE) (TOAB/MTPPS₄/C₆₀/GCE) exhibited high electrocatalytic activity toward H₂O₂. MTPPS₄ enhanced the electrocatalytic ability of C₆₀ in the increasing order TOAB/ZnTPPS₄/C₆₀/GCE < TOAB/FeTPPS₄/C₆₀/GCE < TOAB/CoTPPS₄/C₆₀/GCE. Differential pulse voltammetry (DPV) measurements showed a well-defined linear relationship between the reduction currents and H₂O₂ concentrations in the range from 0.3 to 1.0 mM, with detection limits of 0.07 mM at the TOAB/ZnTPPS₄/C₆₀/GCE electrode, 0.08 mM at the TOAB/FeTPPS₄/C₆₀/GCE electrode, and 0.04 mM at the TOAB/CoTPPS₄/C₆₀/GCE electrode, respectively. The biosensors showed good anti-interference ability toward glucose, ascorbic acid, and L-cysteine, indicating high potential for practical application.
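A minimal sketch (with made-up currents, not data from the paper) of how such a DPV calibration is typically evaluated: fit the peak reduction current against H₂O₂ concentration over the stated 0.3–1.0 mM linear range, then estimate a detection limit. The 3σ/slope convention used here is an assumption, since the abstract does not say how the limits were derived.

```python
# Illustrative calibration for an H2O2 sensor (hypothetical data): fit the
# DPV reduction current vs. concentration, then estimate the detection limit
# as 3*sigma/slope, where sigma is the standard deviation of the blank signal.
import numpy as np

conc_mM = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])    # H2O2, mM
current_uA = np.array([1.2, 1.6, 2.0, 2.5, 2.9, 3.3, 3.7, 4.2])  # made-up currents, uA

slope, intercept = np.polyfit(conc_mM, current_uA, 1)  # linear least squares
sigma_blank = 0.05                                     # assumed blank noise, uA

lod_mM = 3 * sigma_blank / slope
print(f"sensitivity = {slope:.2f} uA/mM, LOD ~ {lod_mM:.3f} mM")
```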
Vision transformers (ViTs) are changing the landscape of object detection approaches. A natural usage of ViTs in detection is to replace the CNN-based backbone with a transformer-based backbone, which is straightforward and effective, at the price of a considerable computation burden at inference. A more subtle usage is the DETR family, which eliminates the need for many hand-designed components in object detection but introduces a decoder that demands an extremely long time to converge. As a result, transformer-based object detection cannot prevail in large-scale applications. To overcome these issues, we propose a novel decoder-free fully transformer-based (DFFT) object detector, achieving high efficiency in both the training and inference stages for the first time. We simplify object detection into an encoder-only, single-level, anchor-based dense prediction problem by centering around two entry points: 1) eliminate the training-inefficient decoder and leverage two strong encoders to preserve the accuracy of single-level feature map prediction; 2) explore low-level semantic features for the detection task with limited computational resources. In particular, we design a novel lightweight detection-oriented transformer backbone that efficiently captures low-level features with rich semantics, based on a well-conceived ablation study. Extensive experiments on the MS COCO benchmark demonstrate that DFFT_SMALL outperforms DETR by 2.5% AP with a 28% computation cost reduction and more than 10× fewer training epochs. Compared with the cutting-edge anchor-based detector RetinaNet, DFFT_SMALL obtains over 5.5% AP gain while cutting down 70% of the computation cost.
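A schematic sketch, not the authors' implementation, of what an encoder-only, single-level, anchor-based dense prediction head looks like: a transformer encoder refines one feature map, and per-location linear heads emit anchor-wise class scores and box offsets, with no DETR-style decoder. All dimensions, depths, and anchor counts below are assumptions.

```python
# Encoder-only dense prediction sketch: one feature level in, per-anchor
# class scores and box deltas out. Sizes are illustrative, not DFFT's.
import torch
import torch.nn as nn

class EncoderOnlyDenseHead(nn.Module):
    def __init__(self, dim=256, num_classes=80, num_anchors=9, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.cls_head = nn.Linear(dim, num_anchors * num_classes)  # per-token classes
        self.box_head = nn.Linear(dim, num_anchors * 4)            # per-token box deltas

    def forward(self, feat):                       # feat: (B, C, H, W), single level
        tokens = feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens = self.encoder(tokens)
        return self.cls_head(tokens), self.box_head(tokens)

scores, deltas = EncoderOnlyDenseHead()(torch.randn(2, 256, 25, 25))
print(scores.shape, deltas.shape)  # (2, 625, 720), (2, 625, 36)
```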
Abstract The relationship between active DNA demethylation induced by overexpressing Tet1 and passive DNA demethylation induced by suppressing Dnmt1 remains unclear. Here, we found that DNMT1 preferentially methylated, but TET1 preferentially demethylated, hemi-methylated CpG sites. These phenomena resulted in a significant overlap in the targets of these two types of DNA demethylation and in the counteraction of Dnmt1 and Tet1 during somatic cell reprogramming. Since the hemi-methylated CpG sites generated during cell proliferation were enriched at core pluripotency loci, DNA demethylation induced by Tet1 or by shRNA against Dnmt1 (sh-Dnmt1) was enriched at these loci, which, in combination with the Yamanaka factors, led to the up-regulation of these genes and promoted somatic cell reprogramming. In addition, since sh-Dnmt1 induces DNA demethylation by impairing the further methylation of hemi-methylated CpG sites generated during cell proliferation, whereas Tet1 induces DNA demethylation by actively demethylating these hemi-methylated CpG sites, Tet1-induced DNA demethylation exhibited a higher ability to open the chromatin structure and up-regulate gene expression than sh-Dnmt1-induced DNA demethylation. Thus, Tet1-induced but not sh-Dnmt1-induced DNA demethylation led to the up-regulation of an additional set of genes that can promote the epithelial-mesenchymal transition and impair reprogramming. When vitamin C was used to further increase the demethylation ability of TET1 during reprogramming, Tet1 induced a larger up-regulation of these genes and significantly impaired reprogramming. Therefore, the current studies provide additional information regarding DNA demethylation during somatic cell reprogramming.
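A toy illustration (not the paper's analysis) of the shared substrate described above: replication converts methylated CpGs into hemi-methylated ones, which DNMT1 re-methylates, TET1 actively demethylates, and Dnmt1 knockdown leaves unmaintained so that methylation is lost passively. The state encoding and rule ordering are invented for illustration.

```python
# Toy state machine for one CpG site: 'full', 'hemi', or 'unmeth'. We follow
# the daughter duplex that inherits the parental methylated strand; its
# sister duplex is already unmethylated after replication.
def replicate(state):
    # A methylated parental strand pairs with a new, unmethylated strand.
    return 'hemi' if state in ('full', 'hemi') else 'unmeth'

def resolve(state, dnmt1_active, tet1_active):
    if state != 'hemi':
        return state
    if tet1_active:        # TET1 preferentially demethylates hemi-CpGs (active)
        return 'unmeth'
    if dnmt1_active:       # DNMT1 maintenance restores full methylation
        return 'full'
    return 'hemi'          # sh-Dnmt1: no maintenance, passive loss over divisions

for label, dnmt1, tet1 in [('normal', True, False),
                           ('sh-Dnmt1', False, False),
                           ('Tet1 overexpression', True, True)]:
    print(label, '->', resolve(replicate('full'), dnmt1, tet1))
# normal -> full; sh-Dnmt1 -> hemi (diluted away); Tet1 -> unmeth
```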
Boosted by large and standardized benchmark datasets, visual object tracking has made great progress in recent years and brought about many new trackers. Among these, correlation-filter-based tracking schemes exhibit impressive robustness and accuracy. In this work, we present a fully functional correlation-filter-based tracking algorithm that simultaneously models target appearance changes from spatial displacements, scale variations, and rotation transformations. The proposed tracker first represents the exhaustive template search in the joint scale and spatial space as a block-circulant matrix. Then, by transferring the target template from the Cartesian coordinate system to the log-polar coordinate system, the circulant structure is preserved for the target even across the whole range of orientations. With this novel representation and transformation, object tracking is performed efficiently and effectively in the joint space with the fast Fourier transform. Experimental results on the VOT2015 benchmark dataset demonstrate its superior performance over state-of-the-art tracking algorithms.
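The key trick can be sketched in a few lines of NumPy: resampling a patch on a log-polar grid turns rotation about the center into a cyclic shift along the angular axis, so the circulant structure exploited by FFT-based correlation filters is preserved. The grid sizes and nearest-neighbor sampling below are simplifying assumptions, not the tracker's actual pipeline.

```python
import numpy as np

def log_polar(img, n_r=32, n_theta=64):
    """Nearest-neighbor resampling of a square patch onto a log-polar grid."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rs = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_r))   # log-spaced radii
    ts = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = np.rint(cy + rs[:, None] * np.sin(ts)[None, :]).astype(int)
    xs = np.rint(cx + rs[:, None] * np.cos(ts)[None, :]).astype(int)
    return img[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]

patch = np.random.default_rng(0).random((65, 65))
lp = log_polar(patch)
lp_rot = log_polar(np.rot90(patch))        # 90-degree in-plane rotation
# The rotation appears as a cyclic shift of n_theta * (90/360) = 16 angular bins:
scores = [np.sum(lp * np.roll(lp_rot, k, axis=1)) for k in range(64)]
print(int(np.argmax(scores)))              # 16
```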
Vision foundation models have been explored recently to build general-purpose vision systems. However, the predominant paradigms, driven by casting instance-level tasks as object-word alignment, bring heavy cross-modality interaction, which is ineffective for prompting object detection and visual grounding. Another line of work, focused on pixel-level tasks, often encounters a large annotation gap between things and stuff and suffers from mutual interference between foreground-object and background-class segmentation. In stark contrast to the prevailing methods, we present APE, a universal visual perception model for aligning and prompting everything all at once in an image to perform diverse tasks, i.e., detection, segmentation, and grounding, as an instance-level sentence-object matching paradigm. Specifically, APE advances the convergence of detection and grounding by reformulating language-guided grounding as open-vocabulary detection, which efficiently scales up model prompting to thousands of category vocabularies and region descriptions while maintaining the effectiveness of cross-modality fusion. To bridge the granularity gap between different pixel-level tasks, APE equalizes semantic and panoptic segmentation to proxy instance learning by treating any isolated region as an individual instance. APE aligns vision and language representations on broad data with natural and challenging characteristics all at once, without task-specific fine-tuning. Extensive experiments on over 160 datasets demonstrate that, with only one suite of weights, APE outperforms (or is on par with) the state-of-the-art models, proving that an effective yet universal perception for aligning and prompting anything is indeed feasible. Code and trained models are released at https://github.com/shenyunhang/APE.
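A schematic sketch (assumed shapes and a hypothetical temperature, not APE's actual code) of the core of instance-level sentence-object matching: every region embedding is scored against every category or description embedding with a single dot product, so enlarging the vocabulary grows only the text side rather than the cross-modality fusion.

```python
# Region-text matching as open-vocabulary detection: cosine similarity
# between N region embeddings and M text embeddings yields an N x M score
# matrix; each region is labeled by its best-matching sentence or category.
import torch
import torch.nn.functional as F

regions = F.normalize(torch.randn(100, 256), dim=-1)   # 100 region embeddings (assumed)
texts = F.normalize(torch.randn(3000, 256), dim=-1)    # 3000 category/phrase embeddings

logits = regions @ texts.T / 0.07        # cosine similarity with a temperature
best_score, best_text = logits.max(dim=-1)
print(best_text.shape)                   # one matched sentence/category per region
```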