A Domain Adaptation Method using Light-Weight Neural ODE for Low-Cost FPGAs
Keywords: ODE, cellular neural network
This paper discusses implementation issues of FPGA- and ANN-based PID controllers. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks, but FPGA realization of ANNs with a large number of neurons is still a challenging task. The paper discusses the issues involved in implementing a multi-input neuron with linear/nonlinear excitation functions on an FPGA, and also suggests advantages of error self-recurrent neural networks over backpropagation neural networks.
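The multi-input neuron described above reduces to a fixed-point multiply-accumulate followed by a selectable excitation function. A minimal behavioral sketch, assuming a Q8 fixed-point format and the usual sigmoid/linear choices (the bit width and function names are illustrative, not the paper's design):

```python
import math

FRAC_BITS = 8          # assumed Q8 fixed-point format, not from the paper
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a real value to a Q8 fixed-point integer."""
    return int(round(x * SCALE))

def neuron(inputs, weights, bias, activation="sigmoid"):
    """Multi-input neuron: fixed-point MAC, then a selectable excitation function."""
    acc = to_fixed(bias) * SCALE          # accumulator holds products at 2*FRAC_BITS
    for x, w in zip(inputs, weights):
        acc += to_fixed(x) * to_fixed(w)  # each product is exact in integer arithmetic
    u = acc / (SCALE * SCALE)             # rescale before the nonlinearity
    if activation == "linear":
        return u
    return 1.0 / (1.0 + math.exp(-u))     # sigmoid excitation
```

In hardware the accumulator would be a wide integer register and the nonlinearity a lookup table or approximation; the software model above only mirrors the datapath arithmetic.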
In this paper, a design method for neural networks based on the VHDL hardware description language and an FPGA implementation is proposed. The design of a general neuron for topologies using the backpropagation algorithm is described, and the sigmoid nonlinear activation function is also implemented. The neuron is then used in the design and implementation of a neural network on a Xilinx Spartan-3E FPGA. Simulation results were obtained with Xilinx ISE 8.2i software and are analyzed in terms of operating frequency and chip utilization. Keywords: artificial neural network, backpropagation, FPGA, VHDL.
In this paper, a hardware implementation of an artificial neural network on a Field Programmable Gate Array (FPGA) is carried out step by step. First, single neurons are implemented using different activation functions, and then an XOR gate is implemented. The concurrent structure of a neural network makes it very fast for certain computations, which makes ANNs well suited for implementation in VLSI technology. Hardware realization of a neural network depends on the efficient implementation of a single artificial neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks, although FPGA realization of ANNs with a large number of neurons is still a challenging task. Here, a single neuron with various activation functions is designed and implemented, and an XOR gate is then built using one of these designs. The XOR gate is trained with the gradient descent algorithm using a 3-layer, 5-neuron model.
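The 3-layer, 5-neuron XOR model (2 inputs, 2 hidden neurons, 1 output) trained by gradient descent can be sketched in NumPy; the seed, learning rate, and iteration count below are illustrative choices, not values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.uniform(-1, 1, (2, 2))   # input -> hidden (2 neurons)
b1 = rng.uniform(-1, 1, (1, 2))
W2 = rng.uniform(-1, 1, (2, 1))   # hidden -> output (1 neuron)
b2 = rng.uniform(-1, 1, (1, 1))

lr = 0.5
losses = []
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: gradient descent on the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)
```

Note that a 2-2-1 sigmoid network is the smallest topology that can separate XOR, which is presumably why the paper settles on five neurons; with so few hidden units, convergence depends on the initial weights.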
The present paper documents research towards an efficient algorithm to compute the result of a multiple-input, single-output neural network using floating-point arithmetic on an FPGA. The proposed algorithm focuses on optimizing pipeline delays by splitting the multiply-and-accumulate operation into separate steps using partial products. It revisits the classical algorithm for NN computation and overcomes its main computational bottleneck in an FPGA environment. The proposed algorithm can be implemented in an architecture that fully exploits the pipeline performance of the floating-point arithmetic blocks, allowing very fast computation of the neural network. The performance of the proposed architecture is presented targeting a Cyclone II FPGA device.
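The bottleneck being attacked is the loop-carried dependency of a serial accumulator: each add must wait for the previous one to clear the floating-point adder pipeline. Splitting the sum into independent partial accumulators removes that dependency. A behavioral sketch of the idea (the lane count is an illustrative parameter, not a figure from the paper):

```python
def mac_serial(xs, ws):
    """Classical multiply-accumulate: each add depends on the previous one,
    so a pipelined FP adder stalls between iterations."""
    acc = 0.0
    for x, w in zip(xs, ws):
        acc += x * w
    return acc

def mac_partial(xs, ws, lanes=4):
    """Partial-sum MAC: interleave products across independent accumulators
    (one per adder pipeline stage), then reduce once at the end."""
    partial = [0.0] * lanes
    for i, (x, w) in enumerate(zip(xs, ws)):
        partial[i % lanes] += x * w   # each lane accumulates independently
    return sum(partial)               # final reduction of the partial sums
```

In hardware, `lanes` would match the adder's pipeline depth, so a new product can be accepted every cycle; the software model only shows that the reassociated sum computes the same dot product.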
There has been a body of research on using stochastic computing (SC) for the implementation of neural networks, in the hope that it will reduce area cost and energy consumption. However, no working neural network system based on stochastic computing has been demonstrated that supports the viability of SC-based deep neural networks in terms of both recognition accuracy and cost/energy efficiency. In this demonstration we present an SC-based deep neural network system that is highly accurate and efficient. Our system takes an input image and processes it with a convolutional neural network implemented on an FPGA using stochastic computing, recognizing the input image with nearly the same accuracy as conventional binary implementations.
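The core trick of stochastic computing is that a number in [0, 1] is encoded as the bias of a random bitstream, so multiplication of two independent streams costs a single AND gate. A minimal simulation of that encoding and multiply (stream length and seed are arbitrary choices for the sketch):

```python
import random

def to_stream(p, n, rng):
    """Encode probability p as a Bernoulli bitstream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(p, q, n=8192, seed=0):
    """Stochastic-computing multiply: AND two independent bitstreams.
    The mean of the output stream is an estimate of p * q."""
    rng = random.Random(seed)
    a = to_stream(p, n, rng)
    b = to_stream(q, n, rng)
    c = [x & y for x, y in zip(a, b)]   # one AND gate per bit in hardware
    return sum(c) / n
```

The estimate's variance shrinks only as 1/n, which is exactly the accuracy-versus-latency trade-off that made demonstrating a competitive SC-based deep network nontrivial.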
Competitive majority network trained by error correction (C-Mantec), a recently proposed constructive neural network algorithm that generates very compact architectures with good generalization capabilities, is implemented in a field-programmable gate array (FPGA). A clear difference from most existing neural network implementations (most of them based on the backpropagation algorithm) is that C-Mantec automatically generates an adequate neural architecture while the training of the data is performed. All the steps involved in the implementation, including the on-chip learning phase, are fully described, and a deep analysis of the results is carried out using two sets of benchmark problems. The results show a clear increase in computation speed in comparison to a standard personal computer (PC)-based implementation, demonstrating the usefulness of the intrinsic parallelism of FPGAs for neurocomputational tasks and the suitability of the hardware version of the C-Mantec algorithm for application to real-world problems.
Artificial neural networks have been used in applications that require complex procedural algorithms and in systems that lack an analytical mathematical model. By designing a large network of computing nodes based on the artificial neuron model, new solutions can be developed for computational problems in fields such as image processing and speech recognition. Neural networks are inherently parallel, since each neuron, or node, acts as an autonomous computational element. Artificial neural networks use a mathematical model for each node that processes information from other nodes in the same region; this processing entails a weighted-average computation followed by a nonlinear mathematical transformation. Some typical artificial neural network applications use the exponential function or trigonometric functions for the nonlinear transformation. Various simple artificial neural networks have been implemented using a processor to compute the output of each node sequentially; this approach does not take advantage of the parallelism of a complex artificial neural network. In this work a hardware-based approach is investigated for artificial neural network applications. A field-programmable gate array (FPGA) is used to implement an artificial neuron using hardware multipliers, adders, and CORDIC functional units. To create a large-scale artificial neural network, area-efficient hardware units such as CORDIC units are needed. High-performance, low-cost bit-serial CORDIC implementations are presented. Finally, the FPGA resources and the performance of a hardware-based artificial neuron are presented.
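CORDIC earns its place in such designs because it evaluates trigonometric functions with only shifts, adds, and a small angle table. A sketch of rotation-mode CORDIC computing (cos, sin); floating point stands in for the shift-and-add integer datapath of a bit-serial hardware unit, and the iteration count is an illustrative choice:

```python
import math

# Precomputed rotation angles atan(2^-i) and the aggregate CORDIC gain
ITERS = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
K = 1.0
for i in range(ITERS):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # pre-scale cancels the gain

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC: drive the residual angle z to zero using only
    shifts and adds. Valid for theta in [-pi/2, pi/2] in this sketch."""
    x, y, z = K, 0.0, theta
    for i in range(ITERS):
        d = 1.0 if z >= 0 else -1.0
        # 2**-i multiplications correspond to arithmetic right shifts in hardware
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y   # (cos(theta), sin(theta))
```

Each iteration adds roughly one bit of precision, so the iteration count directly trades latency for accuracy, which is what makes bit-serial CORDIC so area-efficient for large neuron arrays.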