We develop an algorithmic framework for contracting tensor networks and demonstrate its power by classically simulating quantum computations of sizes previously deemed out of reach. Our main contribution, index slicing, is a method that efficiently parallelizes the contraction by breaking it down into much smaller, identically structured subtasks, which can then be executed in parallel without dependencies. We benchmark our algorithm on a class of random quantum circuits, achieving greater than $10^5$ times acceleration over the original estimate of the simulation cost. We then demonstrate applications of the simulation framework in aiding the development of quantum algorithms and quantum error correction. As tensor networks are widely used in computational science, our simulation framework may find further applications.
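To make index slicing concrete, here is a minimal NumPy sketch (our own illustration, not the paper's implementation): fixing a shared index to each of its values splits one contraction into independent, identically structured subtasks whose results are summed at the end. The toy tensors and index names are assumptions for the example.

```python
import numpy as np

# Toy contraction: C[a, c] = sum_{b, s} A[a, b, s] * B[b, s, c].
# Slicing index s yields one smaller, identically structured subtask
# per value of s; the subtasks share no data dependencies, so they
# can run in parallel, and their results are summed at the end.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 8))   # indices a, b, s
B = rng.standard_normal((5, 8, 6))   # indices b, s, c

# Reference: the full contraction in one shot.
C_full = np.einsum("abs,bsc->ac", A, B)

# Sliced: each subtask contracts tensors with index s removed,
# so its memory footprint is a factor of dim(s) smaller.
subtasks = [np.einsum("ab,bc->ac", A[:, :, s], B[:, s, :])
            for s in range(A.shape[2])]   # embarrassingly parallel
C_sliced = sum(subtasks)

assert np.allclose(C_full, C_sliced)
```

In the paper's setting the slices would be distributed across many workers; the sketch runs them sequentially only for brevity.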
By leveraging parallelization in time, a scalable decoding paradigm for topological quantum error-correcting codes effectively resolves the exponential backlog problem without compromising decoding accuracy.
We report, in a sequence of notes, our work on the Alibaba Cloud Quantum Development Platform (AC-QDP). AC-QDP provides a set of tools for aiding the development of both quantum computing algorithms and quantum processors, and is powered by a large-scale classical simulator deployed on Alibaba Cloud. In this note, we report computational experiments demonstrating the classical simulation capability of AC-QDP. We use as a benchmark the random quantum circuits designed for Google's Bristlecone QPU (GRCS). We simulate Bristlecone-70 circuits with depth $1+32+1$ in $0.43$ seconds per amplitude, using $1449$ Alibaba Cloud Elastic Computing Service (ECS) instances, each with $88$ Intel Xeon (Skylake) Platinum 8163 vCPU cores @ 2.5 GHz and $160$ gigabytes of memory. By comparison, the best previously reported results for the same task are $104$ and $135$ seconds, using NASA's HPC Pleiades and Electra systems, respectively (arXiv:1811.09599). Furthermore, we report simulations of Bristlecone-70 with depth $1+36+1$ and depth $1+40+1$ in $5.6$ and $580.7$ seconds per amplitude, respectively. To the best of our knowledge, these are the first successful simulations of instances at these depths.
With the rapid growth of open-source software, choosing software from among many alternatives has become a great challenge. Traditional ranking approaches mainly focus on characteristics of the software itself, such as quality, security, and reliability. In this paper we investigate the market demand for software engineers and propose a novel approach for ranking software by analyzing the market requirements for specific software. We also summarize the characteristics of software job advertisements, analyze why these patterns emerge, and discuss trends in software market requirements. As industry always needs to balance several different factors when selecting software, market demand can be a good indicator for ranking and evaluating software. This paper provides quite a different perspective and some interesting inferences on software market requirements, and it can be a valuable supplement to traditional ranking methods as well as to software evaluation.
To address existing problems in visual tracking and behavior recognition, we propose a novel method for tracking and recognition based on gray prediction. We first use background subtraction to detect the moving target and the cross-entropy method to binarize the image. The morphological filter is then used to eliminate noise, and we extract a template whose size is determined by contour segmentation. The improved gray prediction employs the GM(1,1) model to narrow the prediction scope, so that the human body's motion trajectory can be tracked. By tracking and recording the trajectory of the moving human, we can identify behaviors such as jumping, tumbling, and squatting. Experimental results show that the proposed algorithm correctly recognizes jumping, tumbling, squatting, and other common human motions, and is robust.
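For reference, below is a minimal Python sketch of the standard GM(1,1) gray model (the paper's improved variant may differ in detail); the sample centroid coordinates in the usage line are hypothetical.

```python
import numpy as np

def gm11_predict(x0, horizon=1):
    """Standard GM(1,1) gray prediction (one variable, first order).

    x0: 1-D array of positive observations, e.g. recent coordinates
        of a tracked centroid along one axis.
    horizon: number of future steps to forecast.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                      # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean) sequence
    # Least-squares estimate of development coefficient a and gray input b
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time response: x1_hat(k) = (x0[0] - b/a) * exp(-a*k) + b/a
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)   # inverse accumulation
    return x0_hat[n:]                       # forecast beyond the data

# Hypothetical usage: forecast the next x-coordinate of a tracked centroid.
print(gm11_predict([102.0, 105.5, 109.2, 113.1, 117.4], horizon=1))
```

In a tracker, such a forecast would restrict the search window for the next frame, which is the role gray prediction plays in the proposed pipeline.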
We introduce a distributed classical simulation algorithm for general quantum circuits, and present numerical results for calculating the output probabilities of universal random circuits. We find that we can simulate more qubits to greater depth than previously reported using the cluster supported by the Data Infrastructure and Search Technology Division of the Alibaba Group. For example, computing a single amplitude of an $8\times 8$ qubit circuit with depth $40$ was previously beyond the reach of supercomputers. Our algorithm can compute this within $2$ minutes using a small portion ($\approx 14\%$ of the nodes) of the cluster. Furthermore, by successfully simulating quantum supremacy circuits of size $9\times 9\times 40$, $10\times 10\times 35$, $11\times 11\times 31$, and $12\times 12\times 27$, we give evidence that noisy random circuits with realistic physical parameters may be simulated classically. This suggests that either harder circuits or error correction may be vital for achieving quantum supremacy from random circuit sampling.
It is believed that random quantum circuits are difficult to simulate classically. They have been used to demonstrate quantum supremacy: the execution of a computational task on a quantum computer that is infeasible for any classical computer. The task underlying the assertion of quantum supremacy by Arute et al. (Nature 574, 505--510 (2019)) was initially estimated to require approximately 10,000 years on Summit, currently the world's most powerful supercomputer. The same task was performed on the Sycamore quantum processor in only 200 seconds. In this work, we present a tensor-network-based classical simulation algorithm. Using a Summit-comparable cluster, we estimate that our simulator can perform this task in less than 20 days. On moderately sized instances, we reduce the runtime from years to minutes, running several times faster than Sycamore itself. These estimates are based on explicit simulations of parallel subtasks and leave no room for hidden costs. The simulator's key ingredient is identifying and optimizing the "stem" of the computation: a sequence of pairwise tensor contractions that dominates the computational cost. This orders-of-magnitude reduction in classical simulation time, together with proposals for further significant improvements, indicates that achieving quantum supremacy may require a period of continuing quantum hardware development without an unequivocal first demonstration.
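As a loose illustration of why the pairwise contraction order dominates cost (this is not the paper's simulator), the NumPy sketch below uses einsum_path to compare the naive cost of a small contraction against an optimized order; the chain of matrices is a toy stand-in for a circuit tensor network.

```python
import numpy as np

# The cost of contracting a tensor network is dominated by the order
# (path) of pairwise contractions. NumPy's einsum_path searches for a
# cheap pairwise order and reports estimated FLOP counts, analogous in
# spirit to optimizing the costly "stem" of the computation.

rng = np.random.default_rng(1)
d = 8
tensors = [rng.standard_normal((d, d)) for _ in range(5)]
subscripts = "ab,bc,cd,de,ef->af"   # a small matrix-product chain

path, report = np.einsum_path(subscripts, *tensors, optimize="optimal")
print(report)                       # naive vs. optimized FLOP estimates

# Reuse the optimized path for the actual contraction.
result = np.einsum(subscripts, *tensors, optimize=path)
```

In a real circuit simulation the network is far larger and the few most expensive pairwise contractions (the stem) account for nearly all of the runtime, which is why optimizing them pays off.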
The continuous development of computing and communication technologies, together with the emergence of new applications such as autonomous driving, augmented reality, and the industrial Internet of Things, has posed significant challenges to the computing and storage capabilities of terminal devices. A new computing paradigm is needed to provide high-speed, low-latency computing services for these applications. While cloud computing offers abundant computing power, it often fails to meet their latency requirements because of the long distance between cloud servers and terminals. To address this issue, the network paradigm of edge computing has been introduced. One of the key problems in edge computing is how to effectively offload tasks generated by terminals. Previous research has shown that reinforcement learning methods are effective approaches for tackling computation offloading. In this article, we provide a comprehensive survey and summary of the application of reinforcement learning and deep reinforcement learning to task offloading, and analyze possible future directions.
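To illustrate the basic idea of learning an offloading policy (a deliberately simplified sketch, not any specific method surveyed here), the toy below applies tabular Q-learning to a binary local-vs-edge decision; the states, cost tables, and parameters are all invented for the example.

```python
import random

# Toy tabular Q-learning for a binary offloading decision:
# state = coarse task size (0=small, 1=medium, 2=large),
# action = 0 (execute locally) or 1 (offload to the edge).
# Costs are invented for illustration; a real system would use
# measured latency/energy and a much richer state space.

LOCAL_COST = {0: 1.0, 1: 3.0, 2: 9.0}     # grows quickly with task size
OFFLOAD_COST = {0: 2.0, 1: 2.5, 2: 4.0}   # transmission + edge compute

Q = {(s, a): 0.0 for s in range(3) for a in range(2)}
alpha, epsilon = 0.1, 0.1                 # learning rate, exploration rate

random.seed(0)
for _ in range(5000):
    s = random.randrange(3)               # tasks arrive i.i.d. (a simplification)
    if random.random() < epsilon:
        a = random.randrange(2)           # explore
    else:
        a = min((0, 1), key=lambda x: Q[(s, x)])  # exploit: lowest cost
    cost = LOCAL_COST[s] if a == 0 else OFFLOAD_COST[s]
    # One-step (bandit-style) update; no state transition in this toy.
    Q[(s, a)] += alpha * (cost - Q[(s, a)])

for s in range(3):
    best = "local" if Q[(s, 0)] <= Q[(s, 1)] else "offload"
    print(f"task size {s}: {best}")
```

The learned policy keeps small tasks local and offloads large ones; deep reinforcement learning approaches replace the table with a neural network to handle continuous, high-dimensional states.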