Memristive in-memory computing has demonstrated potential for solving matrix equations in scientific computing. However, the inherent inaccuracies of analog mechanisms make it challenging to achieve high-precision solutions while maintaining low energy consumption. This study introduces a memristive matrix equation solver that considerably accelerates solutions by performing mathematical iterations directly in the analog domain. Our approach facilitates rapid approximate solutions with a scalable circuit topology and expedites the high-precision refinement process by substantially reducing the digital-to-analog conversion overhead. We experimentally validated this methodology using a heterogeneous computing system. We performed simulations of multiple scientific problems on these circuits, including solving the diffusion equation and modeling equilibration in silicon P-N junctions. Notably, our memristive solver, combined with digital refinement, achieved software-equivalent precision (with an error of 10⁻¹²). Compared with conventional digital processing units, this approach offered a 128-fold improvement in solution speed and a 160-fold reduction in energy consumption. This work establishes a foundation for future scientific computing using imprecise analog devices.
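As a rough illustration of the workflow above, the following sketch pairs an imprecise "analog" solve with digital iterative refinement to reach an error near 10⁻¹². It is a minimal NumPy sketch under stated assumptions: analog_solve is a hypothetical stand-in that models the memristive solver as an exact solve perturbed by roughly 1% noise, not the authors' hardware or circuit.

```python
import numpy as np

def analog_solve(A, r, rel_noise=1e-2):
    # Hypothetical stand-in for the memristive analog solver: a low-precision
    # solution of A x = r, modeled as the exact solve plus ~1% relative noise.
    x = np.linalg.solve(A, r)
    return x * (1.0 + rel_noise * np.random.randn(*x.shape))

def solve_with_refinement(A, b, tol=1e-12, max_iter=50):
    # Digital iterative refinement wrapped around the imprecise analog solve.
    x = analog_solve(A, b)                       # fast approximate solution
    for _ in range(max_iter):
        r = b - A @ x                            # residual computed digitally
        if np.linalg.norm(r) / np.linalg.norm(b) < tol:
            break
        x = x + analog_solve(A, r)               # correct with another analog solve
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64)) + 64 * np.eye(64)   # well-conditioned test matrix
b = rng.standard_normal(64)
x = solve_with_refinement(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # relative residual below 1e-12
```

Each refinement step re-solves only for the residual, so the roughly 1% relative error of the analog stage shrinks geometrically until the digital precision floor is reached.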
We study, experimentally and theoretically, acoustic transmission through a bull's eye structure consisting of a central hole with concentric grooves imprinted on both sides of a thin brass plate. At wavelengths slightly larger than the groove periodicity, a transmission peak was observed for a normally incident acoustic wave, with excellent collimation (only ±2° divergence) in the far field. This phenomenon is a manifestation of the two-dimensional circular version of structure-factor-induced resonant transmission. Theoretical predictions based on this mechanism are in good agreement with the experiments.
We present the design, architecture, and detailed performance of a three-dimensional (3D) underwater acoustic carpet cloak (UACC). The proposed 3D UACC is an octahedral pyramid composed of periodically arranged steel strips. This underwater acoustic device, placed over the target to be hidden, manipulates the scattered wavefront to mimic that of a reflecting plane. The effectiveness of the prototype is experimentally demonstrated in an anechoic tank. The measured acoustic pressure distributions show that the 3D UACC works in all directions over a wide frequency range. This experimental verification of a 3D device paves the way for future practical applications.
The modified cinchona alkaloid-catalyzed direct Mannich-type reaction of N-unprotected 2-oxindoles with N-Ts-imines was developed to afford anti-3,3-disubstituted 2-oxindoles bearing vicinal chiral quaternary and tertiary carbon centers in yields up to 90%, with excellent diastereoselectivities (anti/syn up to 95:5) and good enantioselectivities (up to 89% ee). A transition-state model accounting for the anti-diastereo- and enantioselectivity of the reaction was proposed.
In recent decades, machine learning has emerged as a very powerful computational method. Because of its exceptional successes in computer science and engineering, machine learning has ignited research interest in other disciplines, including biology, chemistry, physics, and finance. Machine learning models, which are essentially mathematical models, have traditionally been implemented on digital computing platforms (Figure 1A). The growing prevalence of machine learning has been accompanied by a rapid increase in computing requirements that outpaces Moore's law. Researchers have therefore been committed to developing analog computing hardware platforms to overcome the inherent limitations of digital computing resources. Because wave physics is an attractive candidate for building analog processors (Zangeneh-Nejad et al., Nat. Rev. Mater. 2021; 6: 207-225), wave-based analog computing platforms are emerging as an important route to implementing machine learning.

Most wave-based analog processors are designed on the basis of a mathematical isomorphism between the physical system and a conventional machine learning model, such as a deep neural network (DNN) (Weng et al., Nat. Commun. 2020; 11: 6309; Hughes et al., Sci. Adv. 2019; 5: eaay6946), implying that analog processors can be trained with standard neural-network training techniques. However, designing a physical system with a strict operation-by-operation mathematical isomorphism remains a huge challenge and requires a prohibitive amount of time. In fact, strict mathematical isomorphism is not necessary for building analog computing platforms. Recently, scientists from Cornell University proposed a hybrid in situ/in silico algorithm, called physics-aware training (PAT), to train physical neural networks (PNNs) with back-propagation (Wright et al., Nature 2022; 601: 549-555). PNNs are composed of layers of controllable physical systems, which lack mathematical isomorphism with conventional artificial neural networks, and PAT computes the forward pass on the physical systems themselves rather than training only through numerical simulations. In this way, the impact of the simulation-reality gap on model performance is significantly reduced, and the performance penalties associated with transferring parameters from numerical simulations to real physical devices are avoided. PAT therefore allows researchers to construct PNNs from virtually any controllable physical system and to train the hardware to perform the desired computations. The insights gained from this study will be of great assistance in overcoming the physical limitations of computing resources and in making machine learning faster, more scalable, and more energy efficient.

A universal framework of PNNs is shown in Figure 1B, in which the dark cyan boxes represent controllable physical systems. The input data of a PNN are usually a wave-based signal.
The parameters of a PNN correspond to adjustable properties of the physical system and can be trained in the same way as the weights of a conventional artificial neural network. Figure 1C shows three examples of controllable physical systems. In the audio-frequency mechanical system, input data and parameters are encoded into time-dependent forces that drive the voice coil of a speaker, which in turn drives an oscillating titanium plate. In the nonlinear optical system, input data and parameters are encoded into the spectra of laser pulses, which are transformed and mixed nonlinearly as they pass through a crystal. In the electronic system, the input data are voltage time series, and the parameters are trainable scale factors applied to those time series before they are sent to an analog circuit. These systems can perform both linear and nonlinear operations that are equivalent to common operations in conventional artificial neural networks, such as convolutions and matrix-vector multiplications. DNN-like physical computations can therefore be composed from various physical systems with different parameters.

Back-propagation algorithms are a key ingredient for efficient training and good generalization of conventional artificial neural networks, and applying them to PNN training requires the gradients of the transformations performed by the physical systems. These gradients, however, can only be approximated with finite differences, which makes training slow for PNNs with many parameters. One way around this constraint is in silico training, in which a differentiable digital model, fmodel, is built to approximate the physical system so that both the forward pass and back-propagation can be computed quickly in simulation. The training is then carried out entirely on the computer, and the trained parameters are loaded into the physical system for evaluation. Because of the mismatch between fmodel and the real physical system, however, directly transferring the trained parameters to real devices rarely yields the expected performance.

To solve this problem, the hybrid PAT algorithm performs computations in both the physical and digital domains (Figure 1D). Specifically, the physical system executes the forward pass, producing more accurate outputs than fmodel does in in silico training, while the differentiable digital model fmodel is used only in the backward pass to estimate the gradients of the physical transformation. The universality of PAT is demonstrated by the successful training of three PNNs built from the different physical systems in Figure 1C, and their effectiveness is verified on image and vowel classification. The experimental results show that a PNN is not only an accurate hierarchical classifier that exploits each system's unique physical transformations but also performs machine learning faster and more energy-efficiently than conventional electronic processors.
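To make the forward-physical/backward-digital split concrete, here is a minimal single-layer sketch in NumPy under stated assumptions: physical_system is a hypothetical stand-in for the real hardware (the digital surrogate f_model with a small systematic mismatch), and the toy regression task, noise model, and learning rate are illustrative choices, not the authors' experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_model(x, W, b):
    # Differentiable digital surrogate of the physical transformation.
    return np.tanh(W @ x + b)

def physical_system(x, W, b):
    # Hypothetical stand-in for the real physical forward pass: the surrogate
    # plus a small systematic mismatch, mimicking the simulation-reality gap.
    return np.tanh(1.05 * (W @ x) + b + 0.02)

# Toy regression task: reproduce a fixed nonlinear target mapping.
X = rng.standard_normal((200, 8))
R = rng.standard_normal((8, 4))
Y = np.tanh(X @ R)

W = 0.1 * rng.standard_normal((4, 8))
b = np.zeros(4)
lr = 0.05

for epoch in range(100):
    for x, y in zip(X, Y):
        y_phys = physical_system(x, W, b)                 # forward pass on the "hardware"
        err = y_phys - y                                  # error measured on physical output
        # Backward pass uses the differentiable surrogate f_model only.
        grad_pre = err * (1.0 - f_model(x, W, b) ** 2)    # derivative of tanh
        W -= lr * np.outer(grad_pre, x)
        b -= lr * grad_pre

# Training error evaluated on the (mismatched) physical forward pass.
print(np.mean([(physical_system(x, W, b) - y) ** 2 for x, y in zip(X, Y)]))
```

The essential PAT idea is visible in the inner loop: the error signal comes from the physical (here, mismatched) forward pass, so the trained parameters compensate for the simulation-reality gap instead of being tuned only for the surrogate.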
It should be noted that PAT can only be used to train PNNs composed of adjustable nonlinear physical systems, and such systems are difficult to integrate at small scale; the stability and integration of adjustable nonlinear physical structures therefore remain critical challenges. In addition, data collection during training must be carried out on the physical systems themselves, which hinders parallelization of the training process, so training PNNs for complex tasks with PAT can take a prohibitive amount of time.

Notwithstanding these limitations, this work will benefit DNN-based analog computing hardware platforms, particularly those in which physical rather than digital data are processed or produced. Because PNNs perform part of the computation on data within the physical domain, smart sensors can pre-process physical information before it is converted to the electronic domain (for example, wave imaging and object recognition through a multiple-scattering environment). PNNs can be further incorporated into hybrid sensing systems composed of a trainable physical front end and an all-digital, machine-learning-based back end (Li et al., Sci. Adv. 2021; 7: eabd7690). The spatial and temporal information carried by the wave fields is first encoded by the physical mechanism during the measurement process, and the collected data are then decoded with machine learning to extract the desired information. Such a hybrid system can be regarded as a collaboration between analog and digital computing that is trained jointly by back-propagating errors through both the physical front end and the digital DNN-based back end; the physical front end can thus be interpreted as a trainable layer of the machine learning model. The jointly learned measurement and processing settings can yield considerably higher operating speed, greater processing efficiency, and lower power consumption, particularly when the data originate from measurements with a large number of analog sensors. Hybrid sensing systems of this kind will break down barriers in conventional sensing and provide useful information that could not be captured before.

Although this work focuses on classification, the proposed method can also be extended to regression and even to deep reinforcement learning. Even though the physical realization of PNNs remains a limitation, this work opens a new approach to analog computing with a wide range of potential applications.
This work is supported by the Key-Area Research and Development Program of Guangdong Province (grant 2020B010190002), the National Natural Science Foundation of China (grants 11874383 and 12104480), and the IACAS Frontier Exploration Project (grant QYTS202110). The authors declare no competing interests.
In this paper, a rigid surface decorated with an array of grooves with graded widths is proposed to achieve spatial separation of spoof surface acoustic waves. Because of the intermodal coupling between the forward and backward modes on the graded structure, spoof surface acoustic waves of different frequencies stop propagating and are reflected at different positions along the graded groove grating. The intensity of the acoustic field is effectively enhanced near the propagation-stop position owing to the slow group velocity. We believe that such a system, with its capability of energy concentration and frequency-dependent spatial arrangement of waves, has potential applications in acoustic wave coupling and absorption.
The wireless Internet of Things (IoT) is widely used for data collection and transmission in power systems, with the prerequisite that the base station be compatible with a variety of digital modulation types to meet the transmission requirements of terminals with different modulation modes. As a key technology in wireless IoT communication, automatic modulation classification (AMC) manages resource shortages and improves spectrum utilization efficiency, and deep learning (DL) is frequently exploited to improve the accuracy and efficiency of modulation classification. In practice, the signal-to-noise ratio (SNR) of the wireless signals received by the base station is often low because of complex electromagnetic interference from power equipment, which makes accurate AMC more difficult. Therefore, inspired by the attention mechanism of the multi-layer perceptron (MLP), we introduce AMC-MLP, a novel AMC method for low-SNR signals. First, the sampled I/Q data are converted into a constellation diagram, a smoothed pseudo Wigner-Ville distribution (SPWVD), and a contour diagram of the spectral correlation function (SCF). Second, a convolutional auto-encoder (Conv-AE) is used to denoise the images and extract feature vectors. Finally, an MLP fuses the multimodal features to classify the signals. The AMC-MLP model exploits the complementary characterization provided by the feature images of different modulation modes and boosts classification accuracy for low-SNR signals. Simulation results on the public RadioML 2016.10A dataset show that AMC-MLP provides significantly better classification accuracy for signals in the low-SNR range than other recent deep-learning AMC methods.
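The three-branch pipeline described above can be pictured with a short PyTorch sketch. This is only an illustrative layout under stated assumptions: the layer sizes, the 64x64 image resolution, the 11-class output, and the omission of the Conv-AE decoder (used for denoising pre-training) are placeholders, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    # Encoder half of a Conv-AE used as a per-modality feature extractor
    # (the decoder used for denoising pre-training is omitted here).
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

class AMCMLP(nn.Module):
    # Fuse constellation, SPWVD, and SCF image features with an MLP head.
    def __init__(self, n_classes=11, feat_dim=64):
        super().__init__()
        self.enc_const = ConvEncoder(feat_dim)
        self.enc_spwvd = ConvEncoder(feat_dim)
        self.enc_scf = ConvEncoder(feat_dim)
        self.head = nn.Sequential(
            nn.Linear(3 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )
    def forward(self, const_img, spwvd_img, scf_img):
        f = torch.cat([self.enc_const(const_img),
                       self.enc_spwvd(spwvd_img),
                       self.enc_scf(scf_img)], dim=1)
        return self.head(f)

model = AMCMLP()
imgs = [torch.randn(8, 1, 64, 64) for _ in range(3)]   # dummy batch of feature images
print(model(*imgs).shape)                               # torch.Size([8, 11])
```

If following the abstract, the three encoders would first be trained as denoising auto-encoders on the feature images, after which their encoder halves feed the fusion MLP as sketched here.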
To address the problems of incomplete dehazing of a single image and unnaturalness of the restored image, a multi-scale single-image dehazing network that fuses local and global features is proposed; the network is trained end-to-end on pairs of hazy and haze-free images. The network is divided into a global feature extraction module, a multi-scale feature extraction module, and a deep fusion module. The global feature extraction module extracts global features that characterize image contours; the multi-scale feature extraction module extracts features at different scales to improve learning accuracy; and in the deep fusion module, convolutional layers extract local features describing the image content, after which the local and global features are merged through skip connections. Comparative experiments were carried out on synthetic and real hazy images. The experimental results show that the proposed algorithm achieves the desired dehazing effect and outperforms the comparison algorithms in both subjective and objective evaluations.
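A minimal sketch of the three-module layout (global branch, multi-scale branch, and fusion with skip connections) is given below in PyTorch. The channel counts, kernel sizes, the way the pooled global vector is broadcast, and the residual output connection are illustrative assumptions, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class DehazeNet(nn.Module):
    # Toy multi-scale dehazing network: a global branch and a multi-scale branch
    # merged with local features through skip connections in a fusion module.
    def __init__(self, ch=16):
        super().__init__()
        # Global feature extraction: a pooled descriptor of the whole image.
        self.global_branch = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Multi-scale feature extraction with parallel kernel sizes.
        self.scale3 = nn.Conv2d(3, ch, 3, padding=1)
        self.scale5 = nn.Conv2d(3, ch, 5, padding=2)
        self.scale7 = nn.Conv2d(3, ch, 7, padding=3)
        # Deep fusion: local conv features merged with the other branches.
        self.local = nn.Conv2d(3, ch, 3, padding=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(5 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, x):
        _, _, h, w = x.shape
        g = self.global_branch(x).expand(-1, -1, h, w)   # broadcast global descriptor
        ms = torch.relu(torch.cat([self.scale3(x), self.scale5(x), self.scale7(x)], dim=1))
        local = torch.relu(self.local(x))
        out = self.fuse(torch.cat([g, ms, local], dim=1))
        return torch.clamp(x + out, 0.0, 1.0)            # skip (residual) connection to input

net = DehazeNet()
hazy = torch.rand(2, 3, 64, 64)                          # dummy hazy batch
print(net(hazy).shape)                                    # torch.Size([2, 3, 64, 64])
```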
Nonsynchronous vibrations (NSVs) with high amplitude levels have been observed in the first-stage rotor blades of a multi-stage axial compressor. The excitation is aerodynamic in origin and associated with an unsteady flow field, including the sound field. To investigate the characteristics of the sound field in the axial compressor, the noise inside the compressor casing was measured simultaneously with the vibration of the rotor blades during high-pressure compressor component rig testing. The results show that noise with specific frequency structures appears in the axial compressor under a pre-arranged structural adjustment and specific operating conditions, and the spectral characteristics of this noise are analyzed in detail. The influence of factors such as rotating speed and corrected mass flow rate on the noise characteristics is also discussed. The results presented in this paper can serve as a reference for further understanding of the unsteady flow field and the effects of high-intensity sound waves on the rotor blades.