Randomized benchmarking (RB) protocols have become an essential tool for providing a meaningful partial characterization of experimental quantum operations. While the RB decay rate is known to enable estimates of the average fidelity of those operations under gate-independent Markovian noise, under gate-dependent noise this rate is more difficult to interpret rigorously. In this paper, we prove that the single-qubit RB decay parameter $p$ coincides with the decay parameter of the gate-set circuit fidelity, a novel figure of merit that characterizes the expected average fidelity over arbitrary circuits of operations from the gate-set. We also prove that, in the limit of high-fidelity single-qubit experiments, the potentially alarming disconnect between the average gate fidelity and RB experimental results is simply explained by a basis mismatch between the gates and the state-preparation and measurement procedures, that is, by a unitary degree of freedom in labeling the Pauli matrices. Based on numerical evidence and physically motivated arguments, we conjecture that these results also hold for higher dimensions.
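As a concrete illustration of the decay model behind these results, the sketch below simulates single-qubit RB under gate-independent depolarizing noise (the simplest setting; the paper's analysis concerns gate-dependent noise) and recovers $p$ from the standard fit $F(m) = A p^m + B$. The noise strength, sequence lengths, and fitting details are illustrative assumptions; for a qubit, the fitted $p$ relates to the average gate fidelity through $F_{\mathrm{avg}} = (1+p)/2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate the 24-element single-qubit Clifford group (modulo global phase)
# from the H and S gates.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def canon(U):
    # Fix the global phase so group elements can be compared.
    k = np.argmax(np.abs(U))
    return np.round(U * np.abs(U.flat[k]) / U.flat[k], 8)

cliffords, frontier = [np.eye(2, dtype=complex)], [np.eye(2, dtype=complex)]
seen = {canon(cliffords[0]).tobytes()}
while frontier:
    nxt = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            key = canon(V).tobytes()
            if key not in seen:
                seen.add(key); cliffords.append(V); nxt.append(V)
    frontier = nxt
assert len(cliffords) == 24

def depolarize(rho, p):
    # Depolarizing channel: shrink the traceless part of rho by p.
    return p * rho + (1 - p) * np.trace(rho) * np.eye(2) / 2

def survival(m, p, shots=200):
    out = 0.0
    for _ in range(shots):
        rho = np.array([[1, 0], [0, 0]], dtype=complex)
        total = np.eye(2, dtype=complex)
        for i in rng.integers(24, size=m):
            U = cliffords[i]
            rho = depolarize(U @ rho @ U.conj().T, p)
            total = U @ total
        Uinv = total.conj().T                      # recovery gate
        rho = depolarize(Uinv @ rho @ Uinv.conj().T, p)
        out += np.real(rho[0, 0])
    return out / shots

p_true = 0.98
ms = np.array([1, 4, 8, 16, 32])
ys = np.array([survival(m, p_true) for m in ms])
# Under this noise the decay is exactly 1/2 + (1/2) p^(m+1); fit the slope.
p_fit = np.exp(np.polyfit(ms + 1, np.log(ys - 0.5), 1)[0])
print(f"fitted p = {p_fit:.4f}, average gate fidelity (1+p)/2 = {(1 + p_fit) / 2:.4f}")
```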
The question of how irreversibility can emerge as a generic phenomenon when the underlying mechanical theory is reversible has been a long-standing fundamental problem for both classical and quantum mechanics. We describe a mechanism for the appearance of irreversibility that applies to coherent, isolated systems in a pure quantum state. This equilibration mechanism requires only an assumption of sufficiently complex internal dynamics and natural information-theoretic constraints arising from the infeasibility of collecting an astronomical amount of measurement data. Remarkably, we are able to prove that irreversibility can be understood as typical without assuming decoherence or restricting to coarse-grained observables, and hence occurs under conditions and time scales distinct from those implied by the usual decoherence point of view. We illustrate the effect numerically in several model systems and prove that the effect is typical under the standard random-matrix conjecture for complex quantum systems.
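A toy numerical illustration of the equilibration effect, under the standard random-matrix modeling of complex dynamics: a pure product state is evolved by a Hamiltonian drawn from the Gaussian unitary ensemble, and a local observable settles near its infinite-time (diagonal-ensemble) average. The system size and observable are arbitrary choices for illustration, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                               # qubits -> 256-dimensional Hilbert space
d = 2 ** n

# Random GUE Hamiltonian as a proxy for "sufficiently complex" dynamics.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Hmat = (A + A.conj().T) / 2
evals, evecs = np.linalg.eigh(Hmat)

# Observable: Pauli Z on the first qubit.
Z = np.diag([1.0, -1.0])
O = np.kron(Z, np.eye(d // 2))

psi0 = np.zeros(d); psi0[0] = 1.0   # product initial state |0...0>
c = evecs.conj().T @ psi0           # energy-basis amplitudes

for t in (0.0, 0.5, 2.0, 10.0, 50.0):
    psi_t = evecs @ (np.exp(-1j * evals * t) * c)
    print(f"t = {t:5.1f}   <Z_1> = {np.real(psi_t.conj() @ O @ psi_t):+.4f}")

# Infinite-time (diagonal-ensemble) average for comparison:
O_energy = evecs.conj().T @ O @ evecs
print("time average:", np.real(np.sum(np.abs(c) ** 2 * np.diag(O_energy))))
```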
We present experimental results on the measurement of fidelity decay under contrasting system dynamics using a nuclear magnetic resonance quantum information processor. The measurements were performed by implementing a scalable circuit in the model of deterministic quantum computation with only one quantum bit. The results show measurable differences between regular and complex behavior, and, for complex dynamics, are faithful to the expected theoretical decay rate. Moreover, we illustrate how the experimental method can be seen as an efficient way either to extract coarse-grained information about the dynamics of a large system or to measure the decoherence rate from engineered environments.
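The scalable circuit referred to here is built on the one-clean-qubit (DQC1) primitive, in which the ancilla coherence encodes the normalized trace of the system dynamics. A minimal sketch of that primitive, with a Haar-random unitary standing in for the actual dynamics (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
d = 2 ** n

# Haar-random unitary standing in for the (regular or complex) dynamics.
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Q, R = np.linalg.qr(G)
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

# DQC1: ancilla prepared in |+>, register maximally mixed, controlled-U;
# the ancilla measurements then give <X> + i<Y> = tr(U)/d.
print("tr(U)/d        :", np.trace(U) / d)

# Sampling view of the mixed register: averaging <z|U|z> over random
# computational-basis states z converges to tr(U)/d.
zs = rng.integers(d, size=2000)
print("sampled average:", np.mean(U[zs, zs]))
```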
In close analogy to the fundamental role of random numbers in classical information theory, random operators are a basic component of quantum information theory. Unfortunately, the implementation of random unitary operators on a quantum processor is exponentially hard. Here we introduce a method for generating pseudo-random unitary operators that can reproduce those statistical properties of random unitary operators most relevant to quantum information tasks. This method requires exponentially fewer resources and hence enables the practical application of random unitary operators in quantum communication and information processing protocols. Using a nuclear magnetic resonance quantum processor, we were able to realize pseudo-random unitary operators that reproduce the expected random distribution of matrix elements.
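A sketch of the basic idea, assuming a simple random-circuit construction (alternating random single-qubit rotations and fixed CZ layers, chosen here for illustration rather than taken from the paper): the matrix elements of the resulting pseudo-random unitaries approach the statistics expected of Haar-random operators.

```python
import numpy as np

rng = np.random.default_rng(3)
n, depth = 4, 20
d = 2 ** n

def random_su2():
    # Haar-random 2x2 unitary via QR of a complex Ginibre matrix.
    g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

I2 = np.eye(2, dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
ent_even = np.kron(CZ, CZ)                  # CZ on qubit pairs (0,1), (2,3)
ent_odd = np.kron(I2, np.kron(CZ, I2))      # CZ on qubit pair (1,2)

def pseudo_random_unitary():
    U = np.eye(d, dtype=complex)
    for layer in range(depth):
        L = random_su2()
        for _ in range(n - 1):
            L = np.kron(L, random_su2())    # independent rotation per qubit
        U = (ent_even if layer % 2 == 0 else ent_odd) @ L @ U
    return U

# Haar prediction: d|U_ij|^2 is (nearly) exponentially distributed, with
# variance (d - 1)/(d + 1) at finite d.
vals = np.concatenate([(d * np.abs(pseudo_random_unitary()) ** 2).ravel()
                       for _ in range(20)])
print("variance of d|U_ij|^2:", round(vals.var(), 3))
print("Haar prediction      :", round((d - 1) / (d + 1), 3))
```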
Simulating quantum circuits classically is an important area of research in quantum information, with applications in computational complexity and the validation of quantum devices. One of the state-of-the-art simulators, that of Bravyi et al., utilizes a randomized sparsification technique to approximate the output state of a quantum circuit by a stabilizer sum with a reduced number of terms. In this paper, we describe an improved Monte Carlo algorithm for performing randomized sparsification. This algorithm reduces the runtime of computing the approximate state by the factor $\ell/m$, where $\ell$ and $m$ are respectively the total and non-Clifford gate counts. The main technique is a circuit recompilation routine based on manipulating exponentiated Pauli operators. The recompilation routine also facilitates numerical search for Clifford decompositions of products of gates, which can further reduce the runtime in certain cases. It may additionally lead to a framework for optimizing circuit implementations over a gate set, reducing the overhead for state injection in fault-tolerant implementations. We provide a concise exposition of randomized sparsification and describe how to use it to estimate circuit amplitudes in a way that can be generalized to a broader class of gates and states. This latter method can be used to obtain additive-error estimates of circuit probabilities with a faster runtime than the full techniques of Bravyi et al. Such estimates are useful for validating near-term quantum devices provided that the target probability is not exponentially small.
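The core of randomized sparsification can be stated independently of the stabilizer machinery: an $\ell_1$-importance-sampled, unbiased approximation of a state given as a superposition of simpler terms. A minimal sketch with generic unit vectors standing in for stabilizer states (the sizes and the error heuristic are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
d, terms = 64, 200

# Generic unit vectors |phi_i> standing in for stabilizer states.
phis = rng.normal(size=(terms, d)) + 1j * rng.normal(size=(terms, d))
phis /= np.linalg.norm(phis, axis=1, keepdims=True)
c = rng.normal(size=terms) + 1j * rng.normal(size=terms)

psi = c @ phis                       # exact (unnormalized) superposition
l1 = np.sum(np.abs(c))               # the l1 norm controls the sampling cost
probs = np.abs(c) / l1

def sparsify(k):
    # Sample k terms with probability |c_i|/||c||_1; reweighting each by
    # ||c||_1 * phase(c_i) / k makes the estimator unbiased.
    idx = rng.choice(terms, size=k, p=probs)
    return (l1 / k) * (c[idx] / np.abs(c[idx])) @ phis[idx]

for k in (10, 100, 1000, 10000):
    err = np.linalg.norm(sparsify(k) - psi)
    print(f"k={k:5d}  error={err:7.2f}  (~ l1/sqrt(k) = {l1 / np.sqrt(k):6.2f})")
```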
Quantum computers are inhibited by physical errors that occur during computation. For this reason, the development of increasingly sophisticated error characterization and error suppression techniques is central to the progress of quantum computing. Error distributions are considerably influenced by the precise gate scheduling across the entire quantum processing unit. To account for this holistic feature, we may ascribe each error profile to a (clock) cycle, which is a scheduled list of instructions over an arbitrarily large fraction of the chip. A celebrated technique known as randomized compiling introduces randomness into the instructions of each cycle, which yields effective cycles with simpler, stochastic error profiles. In the present work, we leverage the structure of cycle benchmarking (CB) circuits as well as known Pauli channel estimation techniques to derive a method, which we refer to as cycle error reconstruction (CER), for estimating, with multiplicative precision, the marginal error distribution associated with any effective cycle of interest. The CER protocol is designed to scale to an arbitrarily large number of qubits. Furthermore, we develop a fast compilation-based calibration method, referred to as stochastic calibration (SC), to identify and suppress local coherent error sources occurring in any effective cycle of interest. We performed both protocols on IBM Q 5-qubit devices. Via our calibration scheme, we obtained up to a 5-fold improvement in circuit performance.
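The estimation idea underlying CER can be illustrated on a single qubit under assumed stochastic Pauli noise: each Pauli expectation decays geometrically with the number of cycle repetitions, and the fitted eigenvalues convert to Pauli error probabilities via a Walsh-Hadamard transform. This is a schematic of the general technique, not the CER protocol itself:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed Pauli error probabilities (I, X, Y, Z) of the effective cycle.
p_true = np.array([0.97, 0.01, 0.005, 0.015])

# Pauli eigenvalues lambda = W p, where W[a,b] = +1 if Pauli a and Pauli b
# commute and -1 otherwise; W is its own inverse up to a factor of 4.
W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]])
lam = W @ p_true

# Mock decay data: each Pauli expectation after m cycles is lambda^m (plus
# noise); fit the log-slope to estimate each eigenvalue.
ms = np.arange(1, 25)
lam_est = np.array([
    np.exp(np.polyfit(ms,
                      np.log(l ** ms * (1 + 0.005 * rng.normal(size=ms.size))),
                      1)[0])
    for l in lam])

p_est = W @ lam_est / 4              # invert the Walsh transform
print("true Pauli probabilities     :", p_true)
print("estimated Pauli probabilities:", np.round(p_est, 4))
```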
We know that quantum mechanics enables the performance of computational and cryptographic tasks that are impossible (or impracticable) using only classical physics. It seems natural to examine manifestations of inherently quantum behaviour as potential sources of these capabilities. Here [1] we establish that quantum contextuality, a generalization of nonlocality identified by Bell [2] and Kochen-Specker [3] almost 50 years ago, is a critical resource for quantum speed-up within the leading model for fault-tolerant quantum computation, known as magic state distillation (MSD) [4], [5].

We consider the framework of fault-tolerant stabilizer quantum computation, which provides the most promising route to achieving robust universal quantum computation thanks to the discovery of high-threshold codes in two-dimensional geometries. In this framework, only a subset of quantum operations -- namely, stabilizer operations -- can be achieved via a fault-tolerant encoding. These operations define a closed subtheory (i.e. sets of states, transformations and measurements) of quantum theory -- the stabilizer subtheory -- which is not universal and in fact admits an efficient classical simulation. The stabilizer subtheory can be promoted to universal quantum computation through MSD, which relies on a large number of ancillary resource states. Using $d$-level (where $d$ is an odd prime) quantum systems -- qudits -- as the fundamental unit of information leads to mathematical and conceptual simplifications [6] (as well as improved efficiency and thresholds for the MSD subroutine). The term stabilizer refers to the fact that the states arising in our subtheory are simultaneous eigenstates of elements of the finite Heisenberg-Weyl (or, equivalently, the generalized Pauli) group [7], [8], [9].

If the input states to an MSD subroutine are unsuitable, then the overall computation remains classically efficiently simulable. We show that quantum contextuality plays a critical role in characterizing the suitability of quantum states for MSD. Our approach builds on recent work [10] that has established a remarkable connection between contextuality and graph theory. We use this combinatorial framework to identify non-contextuality inequalities such that the onset of state-dependent contextuality, using stabilizer measurements, coincides exactly with the possibility of universal quantum computing via MSD.
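A compact way to see the state-dependent boundary described here is through the negativity of the discrete Wigner function, which for odd prime dimension tracks contextuality with respect to stabilizer measurements. A qutrit sketch, using the standard discrete Wigner construction (the specific "magic" state below is chosen for illustration):

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)

X = np.roll(np.eye(d), 1, axis=0)        # X|j> = |j+1 mod d>
Z = np.diag(omega ** np.arange(d))       # Z|j> = omega^j |j>

# Phase-point operator at the origin: the parity operator |j> -> |-j mod d>.
A0 = np.eye(d)[:, (-np.arange(d)) % d]

def phase_point(q, p):
    # Global phases of the displacement operator cancel in D A0 D^dagger.
    D = np.linalg.matrix_power(X, q) @ np.linalg.matrix_power(Z, p)
    return D @ A0 @ D.conj().T

def wigner(rho):
    return np.real(np.array([[np.trace(phase_point(q, p) @ rho)
                               for p in range(d)] for q in range(d)])) / d

stab = np.zeros((d, d), dtype=complex); stab[0, 0] = 1.0    # stabilizer |0>
s = np.zeros(d, dtype=complex); s[1], s[2] = 1, -1; s /= np.sqrt(2)
magic = np.outer(s, s.conj())            # a highly non-stabilizer state

print("min W, stabilizer state:", wigner(stab).min())   # >= 0: simulable
print("min W, magic state     :", wigner(magic).min())  # < 0: contextual
```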
This project report summarizes the results of monitoring a post-tensioned spliced girder bridge in Salt Lake City. This report describes the monitoring of the 4500 South Bridge on Interstate 15. The north-bound bridge consists of eight post-tensioned, spliced, precast concrete girders, having three segments each, for a single clear span of 61.443 m (201 ft 7 in.). Four girders and portions of the bridge deck and parapet wall have been instrumented and monitored for approximately four years. Data recorded from the bridge included concrete strain at selected girder locations, post-tensioning losses measured with eight load cells, and, for one of the girders, deflections obtained through surveys. The measured losses at midspan in the monitored girder, including time-dependent losses and anchorage seating and friction losses, averaged 14.5% of the initial post-tensioning forces; the absolute upward midspan deflection was 0.15% of the clear span, and the two splice points deflected in an almost identical manner, indicating excellent girder/splice performance. Analytical procedures are compared with experimental measurements of the losses in the monitored post-tensioned spliced precast concrete girder. The losses assumed in design were very close to those observed, and the design methodology for incorporating losses is found to be adequate. Shrinkage and creep tests, performed on the concrete used in constructing the post-tensioned spliced, precast concrete girders, were used to obtain the ultimate creep coefficient and ultimate shrinkage strain. Linear variable differential transformer (LVDT) measurements at the cold joints show that the cold joints are in good health. The abutment movements and rotations were found to be small. The vertical deflections of the post-tensioned girders due to thermal gradients were measured and compared with deflections predicted by the American Association of State Highway and Transportation Officials (AASHTO) provisions. General recommendations for using spliced-girder post-tensioned bridges in future projects are provided.
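For concreteness, the reported ratios translate into absolute quantities as follows (simple arithmetic on the figures quoted above):

```python
# Simple arithmetic on the figures reported in the abstract.
span_m = 61.443                      # clear span (201 ft 7 in.)
deflection_m = 0.0015 * span_m       # upward midspan deflection, 0.15% of span
print(f"midspan deflection ~ {deflection_m * 1000:.0f} mm "
      f"({deflection_m / 0.0254:.1f} in.)")

loss_fraction = 0.145                # average measured losses at midspan
print(f"retained post-tensioning force ~ {(1 - loss_fraction) * 100:.1f}%")
```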