
Quantum neural network

Quantum neural networks (QNNs) are neural network models based on the principles of quantum mechanics. There are two distinct approaches to QNN research: one exploits quantum information processing to improve existing neural network models (and sometimes vice versa), while the other searches for potential quantum effects in the brain.

In the computational approach to quantum neural network research, scientists try to combine artificial neural network models (which are widely used in machine learning for the important task of pattern classification) with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big-data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await full implementation in physical experiments.

Quantum neural network research is still in its infancy, and a conglomeration of proposals and ideas of varying scope and mathematical rigor has been put forward. Most of them are based on the idea of replacing the classical binary or McCulloch-Pitts neuron with a qubit (which can be called a “quron”), resulting in neural units that can be in a superposition of the states ‘firing’ and ‘resting’.

The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley. Kak discussed the similarity of the neural activation function to the quantum mechanical eigenvalue equation, and later discussed the application of these ideas to the study of brain function, as well as the limitation of this approach: the need to postulate the agent in an abstract space. Ajit Narayanan and Tammy Menneer proposed a photonic implementation of a quantum neural network model that is based on the many-universes theory and “collapses” into the desired model upon measurement. Since then, more and more articles have been published in journals of computer science as well as quantum physics in search of a superior quantum neural network model.

Many proposals attempt to find a quantum equivalent for the perceptron, the unit from which neural networks are constructed. A problem is that nonlinear activation functions do not immediately correspond to the mathematical structure of quantum theory, since quantum evolution is described by linear operations and leads to probabilistic measurement outcomes. Ideas for imitating the perceptron activation function with a quantum mechanical formalism range from special measurements to postulating nonlinear quantum operators (a mathematical framework that is disputed). A direct implementation of the activation function using the circuit-based model of quantum computation has been proposed by Schuld, Sinayskiy and Petruccione, based on the quantum phase estimation algorithm.
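The “quron” idea, and the linearity problem it runs into, can be made concrete with a few lines of linear algebra. The following sketch uses plain NumPy rather than any particular quantum programming framework, with illustrative amplitudes chosen for the example: a quron is a two-dimensional state vector in a superposition of ‘resting’ and ‘firing’, its evolution is a linear (unitary) map, and this is contrasted with the nonlinear step activation of a classical perceptron.

```python
import numpy as np

# Basis states: |0> = 'resting', |1> = 'firing'.
resting = np.array([1.0, 0.0])
firing = np.array([0.0, 1.0])

# A quron in superposition (illustrative amplitudes; |alpha|^2 + |beta|^2 = 1).
alpha, beta = np.sqrt(0.7), np.sqrt(0.3)
quron = alpha * resting + beta * firing

# Quantum evolution is *linear*: a unitary matrix, here a rotation by theta.
theta = np.pi / 8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(U.T @ U, np.eye(2))  # unitarity (real orthogonal here)

evolved = U @ quron
print("P(firing after evolution) =", evolved[1] ** 2)

# A classical perceptron, by contrast, applies a nonlinear step activation to
# its net input; such a map has no direct unitary counterpart, which is the
# structural mismatch discussed above.
def perceptron_fires(weights, inputs, bias=0.0):
    return np.dot(weights, inputs) + bias > 0
```
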
A substantial amount of interest has been given to a “quantum-inspired” model that uses ideas from quantum theory to implement a neural network based on fuzzy logic.

Some contributions reverse the approach and try to exploit insights from neural network research in order to obtain powerful applications for quantum computing, such as quantum algorithm design supported by machine learning. An example is the work of Elizabeth Behrman and Jim Steck, who propose a quantum computing setup that consists of a number of qubits with tunable mutual interactions. Following the classical backpropagation rule, the strengths of the interactions are learned from a training set of desired input-output relations, and the quantum network thus ‘learns’ an algorithm.

The quantum associative memory algorithm was introduced by Dan Ventura and Tony Martinez in 1999. The authors do not attempt to translate the structure of artificial neural network models into quantum theory, but propose an algorithm for a circuit-based quantum computer that simulates associative memory. The memory states (which in Hopfield neural networks are saved in the weights of the neural connections) are written into a superposition, and a Grover-like quantum search algorithm retrieves the memory state closest to a given input (a toy simulation of this retrieval step is sketched at the end of this section). One advantage is the exponential storage capacity of memory states; however, the question remains whether the model has significance for the initial purpose of Hopfield models, namely as a demonstration of how simplified artificial neural networks can simulate features of the brain.

Rinkus proposes that distributed representation, specifically sparse distributed representation (SDR), provides a classical implementation of quantum computing. The set of SDR codes stored in an SDR coding field will generally intersect with one another to varying degrees. In other work, Rinkus describes a fixed-time learning (and inference) algorithm that maps similarity in the input space to similarity (intersection size) in the SDR code space. Assuming that input similarity correlates with probability, any single active SDR code is also a probability distribution over all stored inputs, with the probability of each input measured by the fraction of its SDR code that is active (i.e., the size of its intersection with the active SDR code). The learning/inference algorithm can also be viewed as a state-update operator: because any single active SDR code simultaneously represents both the probability of the single input, X, to which it was assigned during learning and the probabilities of all other stored inputs, the same physical process that updates the probability of X also updates all stored probabilities. ‘Fixed time’ means that the number of computational steps comprising this process (the update algorithm) remains constant as the number of stored codes increases. This theory departs radically from the standard view of quantum computing, and from quantum physical theory more generally: rather than assuming that the lowest-level entities in the system, i.e., single binary neurons, exist in superposition, it assumes only that higher-level, composite entities, i.e., whole SDR codes (which are sets of binary neurons), exist in superposition.
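The intersection-as-probability idea can be illustrated with a small sketch. The example below is not Rinkus's actual algorithm: the coding field size, code size, and stored codes are invented for illustration, and the readout loop over stored codes is only a way of displaying the probabilities (Rinkus's fixed-time claim concerns the neural update itself, not this display loop).

```python
# Hypothetical SDR setup: codes are sets of active unit indices drawn from a
# small coding field; each stored code has CODE_SIZE active units.
CODE_SIZE = 4

# Three stored SDR codes (invented example values).
stored_codes = {
    "input_A": {0, 1, 2, 3},
    "input_B": {0, 1, 4, 5},   # overlaps input_A in 2 units
    "input_C": {6, 7, 8, 9},   # disjoint from input_A
}

# Suppose the currently active code is the one assigned to input_A.
active_code = stored_codes["input_A"]

# Probability of each stored input = fraction of its code that is active,
# i.e., intersection size divided by code size, as described above.
probs = {name: len(active_code & code) / CODE_SIZE
         for name, code in stored_codes.items()}
print(probs)  # {'input_A': 1.0, 'input_B': 0.5, 'input_C': 0.0}
```
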
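The Grover-like retrieval step of the quantum associative memory described above can likewise be illustrated with a small statevector simulation. The sketch below is a simplification, not Ventura and Martinez's actual circuit: the three-qubit memory patterns and the marked query target are invented, and retrieval is shown as one round of amplitude amplification (reflection about the stored-memory superposition) rather than the full algorithm.

```python
import numpy as np

n_qubits = 3
N = 2 ** n_qubits

# Hypothetical memory patterns stored as computational-basis indices.
memories = [0b010, 0b101, 0b110]
marked = 0b101  # the stored pattern matching the query

# Write the memories into an equal superposition.
init = np.zeros(N)
init[memories] = 1.0 / np.sqrt(len(memories))

# Oracle: phase-flip the marked state.
oracle = np.eye(N)
oracle[marked, marked] = -1.0

# Reflection about the initial memory superposition (amplitude amplification).
diffusion = 2.0 * np.outer(init, init) - np.eye(N)

state = diffusion @ (oracle @ init)  # one Grover-style iteration

probs = state ** 2
print("P(retrieve marked pattern) =", probs[marked])  # ~0.93 after one step
```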

[ "Quantum computer" ]
Parent Topic
Child Topic
    No Parent Topic