Quantum algorithm for linear systems of equations
The quantum algorithm for linear systems of equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd and formulated in 2009, is a quantum algorithm for solving systems of linear equations. The algorithm estimates the result of a scalar measurement on the solution vector of a given linear system of equations.[1]
The algorithm is one of the main fundamental algorithms expected to provide an exponential speedup over their classical counterparts, along with Shor's factoring algorithm and Feynman's quantum simulation algorithm. Provided the linear system is sparse and has a low condition number $\kappa$, and the user is interested in the result of a scalar measurement on the solution vector rather than in the values of the solution vector itself, the algorithm has a runtime of $O(\log(N)\kappa^2)$, offering an exponential speedup over the fastest classical algorithm, which runs in $O(N\sqrt{\kappa})$, where $N$ is the number of variables in the linear system.
An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by Barz et al. and Pan et al. in parallel. The demonstrations consisted of solving simple linear equations on specially designed quantum devices.[2][3]
Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential to be one of the most practically useful quantum algorithms conceived so far. This is an important milestone for quantum computing, as previous algorithms offering an exponential speedup lack either versatility or obvious real-world applications.[4]
Procedure
The problem we are trying to solve is: given a Hermitian $N \times N$ matrix $A$ and a unit vector $\vec{b}$, find the solution vector $\vec{x}$ satisfying $A\vec{x} = \vec{b}$. This algorithm assumes that the user is not interested in the values of $\vec{x}$ itself, but rather the result of applying some operator $M$ onto $\vec{x}$, $\langle x|M|x\rangle$.
First, the algorithm represents the vector $\vec{b}$ as a quantum state of the form:
- $|b\rangle = \sum_{i=1}^{N} b_i |i\rangle$.
Next, Hamiltonian simulation techniques are used to apply the unitary operator $e^{iAt}$ to $|b\rangle$ for a superposition of different times $t$. The ability to decompose $|b\rangle$ into the eigenbasis of $A$ and to find the corresponding eigenvalues $\lambda_j$ is facilitated by the use of quantum phase estimation.
The state of the system after this decomposition is approximately:
- $\sum_{j=1}^{N} \beta_j |u_j\rangle |\lambda_j\rangle$,
where $u_j$ is the eigenvector basis of $A$, and $|b\rangle = \sum_{j=1}^{N} \beta_j |u_j\rangle$.
We would then like to perform the linear map taking $|\lambda_j\rangle$ to $C\lambda_j^{-1}|\lambda_j\rangle$, where $C$ is a normalizing constant. The linear mapping operation is not unitary and thus will require a number of repetitions, as it has some probability of failing. After it succeeds, we uncompute the $|\lambda_j\rangle$ register and are left with a state proportional to:
- $\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle = A^{-1}|b\rangle = |x\rangle$,
where $|x\rangle$ is a quantum-mechanical representation of the desired solution vector $\vec{x}$. To read out all components of $\vec{x}$ would require the procedure to be repeated at least $N$ times. However, it is often the case that one is not interested in $\vec{x}$ itself, but rather some expectation value of a linear operator $M$ acting on $\vec{x}$. By mapping $M$ to a quantum-mechanical operator and performing the quantum measurement corresponding to $M$, we obtain an estimate of the expectation value $\langle x|M|x\rangle$. This allows a wide variety of features of the vector $\vec{x}$ to be extracted, including normalization, weights in different parts of the state space, and moments, without actually computing all the values of the solution vector $\vec{x}$.
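To make the target quantity concrete, the following is a small classical NumPy sketch (with illustrative placeholder matrices, not the quantum procedure) of the quantity $\langle x|M|x\rangle$ that the algorithm estimates:

```python
import numpy as np

# Illustrative 4x4 Hermitian system A|x> = |b> (placeholders, not from the paper).
A = np.array([[2.0, 0.5, 0.0, 0.0],
              [0.5, 2.0, 0.5, 0.0],
              [0.0, 0.5, 2.0, 0.5],
              [0.0, 0.0, 0.5, 2.0]])
b = np.array([1.0, 0.0, 0.0, 1.0])
b = b / np.linalg.norm(b)          # |b> must be a unit vector

# Classical solve; the quantum algorithm never writes out x explicitly.
x = np.linalg.solve(A, b)
x_state = x / np.linalg.norm(x)    # |x> is the normalized solution state

# A scalar measurement: expectation value <x|M|x> for some Hermitian operator M.
M = np.diag([1.0, 2.0, 3.0, 4.0])
expectation = x_state.conj() @ M @ x_state
print(expectation)
```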
Explanation of the Algorithm
Initialization
Firstly, the algorithm requires that the matrix $A$ be Hermitian so that it can be converted into a unitary operator. In the case where $A$ is not Hermitian, define
- $\mathbf{C} = \begin{bmatrix} 0 & A \\ A^\dagger & 0 \end{bmatrix}$.
As $\mathbf{C}$ is Hermitian, the algorithm can now be used to solve $\mathbf{C}\vec{y} = \begin{bmatrix} \vec{b} \\ 0 \end{bmatrix}$ to obtain $\vec{y} = \begin{bmatrix} 0 \\ \vec{x} \end{bmatrix}$.
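The following is a short NumPy illustration of this embedding, using an arbitrary example matrix: it builds $\mathbf{C}$, solves the enlarged system, and reads $\vec{x}$ off the lower block.

```python
import numpy as np

# Arbitrary non-Hermitian example matrix and right-hand side (placeholders).
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
b = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Hermitian embedding C = [[0, A], [A^dagger, 0]].
n = A.shape[0]
C = np.block([[np.zeros((n, n)), A],
              [A.conj().T, np.zeros((n, n))]])

# Solve C y = (b, 0); the solution is y = (0, x), so x sits in the lower block.
rhs = np.concatenate([b, np.zeros(n)])
y = np.linalg.solve(C, rhs)
x = y[n:]
print(np.allclose(A @ x, b))   # True: x solves the original system
```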
Secondly, the algorithm requires an efficient procedure to prepare $|b\rangle$, the quantum representation of $\vec{b}$. It is assumed either that there exists some linear operator that can take some arbitrary quantum state to $|b\rangle$ efficiently, or that this algorithm is a subroutine in a larger algorithm in which $|b\rangle$ is given as input. Any error in the preparation of the state $|b\rangle$ is ignored.
Finally, the algorithm assumes that the state $|\Psi_0\rangle$ can be prepared efficiently, where
- $|\Psi_0\rangle = \sqrt{\frac{2}{T}} \sum_{\tau=0}^{T-1} \sin\frac{\pi\left(\tau + \frac{1}{2}\right)}{T} |\tau\rangle$
for some large $T$. The coefficients of $|\Psi_0\rangle$ are chosen to minimize a certain quadratic loss function which induces error in the $U_{\mathrm{invert}}$ subroutine described below.
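Assuming the $\sin$-weighted form of $|\Psi_0\rangle$ given above, its amplitude vector can be written down and checked for normalization classically:

```python
import numpy as np

T = 64  # illustrative register size

# Amplitudes of |Psi_0> = sqrt(2/T) * sum_tau sin(pi*(tau + 1/2)/T) |tau>.
tau = np.arange(T)
amplitudes = np.sqrt(2.0 / T) * np.sin(np.pi * (tau + 0.5) / T)

print(np.isclose(np.sum(amplitudes**2), 1.0))  # True: the state is normalized
```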
Phase Estimation
Phase estimation requires the ability to apply the unitary operator $e^{iAt}$ for various values of $t$; Hamiltonian simulation is used to transform the Hermitian matrix $A$ into this unitary operator, which can then be applied at will. This is possible if $A$ is s-sparse and efficiently row computable, meaning it has at most $s$ nonzero entries per row and, given a row index, these entries can be computed in time $O(s)$. Under these assumptions, $e^{iAt}$ can be simulated in time $\tilde{O}(\log(N)\,s^2\,t)$.
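The property that phase estimation exploits can be illustrated with a small dense-matrix sketch (using SciPy's matrix exponential rather than a sparse Hamiltonian-simulation routine): applying $e^{iAt}$ to an eigenvector of $A$ only multiplies it by the phase $e^{i\lambda t}$.

```python
import numpy as np
from scipy.linalg import expm

# Small Hermitian example matrix (placeholder).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
t = 0.7

U = expm(1j * A * t)                     # the unitary e^{iAt}
eigvals, eigvecs = np.linalg.eigh(A)

# Each eigenvector |u_j> acquires the phase e^{i*lambda_j*t} under U.
for lam, u in zip(eigvals, eigvecs.T):
    print(np.allclose(U @ u, np.exp(1j * lam * t) * u))  # True, True
```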
$U_{\mathrm{invert}}$ Subroutine
The key subroutine to the algorithm, denoted $U_{\mathrm{invert}}$, is defined as follows:
1. Prepare $|\Psi_0\rangle^C$ on register C.
2. Apply the conditional Hamiltonian evolution $\sum_{\tau=0}^{T-1} |\tau\rangle\langle\tau|^C \otimes e^{iA\tau t_0/T}$.
3. Apply the Fourier transform to the register C. Denote the resulting basis states with $|k\rangle$ for $k = 0, \ldots, T-1$. Define $\tilde{\lambda}_k := 2\pi k / t_0$.
4. Adjoin a three-dimensional register S in the state
- $|h(\tilde{\lambda}_k)\rangle^S := \sqrt{1 - f(\tilde{\lambda}_k)^2 - g(\tilde{\lambda}_k)^2}\,|\text{nothing}\rangle^S + f(\tilde{\lambda}_k)\,|\text{well}\rangle^S + g(\tilde{\lambda}_k)\,|\text{ill}\rangle^S$,
5. Reverse steps 1-3, uncomputing any garbage produced along the way.
where the functions f and g are filter functions. The states 'nothing', 'well', and 'ill' are used to instruct the loop body on how to proceed: 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of $|b\rangle$ is in the ill-conditioned subspace of $A$ and the algorithm will not be able to produce the desired inversion.
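The net effect of a successful run of the subroutine can be illustrated with an idealized statevector sketch (ignoring the filter functions and the discretization of the eigenvalues): each eigencomponent $\beta_j$ of $|b\rangle$ is rescaled by $C/\lambda_j$, which rotates the state onto the direction of $A^{-1}\vec{b}$.

```python
import numpy as np

# Placeholder Hermitian matrix and unit vector.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([0.6, 0.8])

eigvals, eigvecs = np.linalg.eigh(A)
beta = eigvecs.T @ b                      # beta_j = <u_j|b>

C = np.min(np.abs(eigvals))               # normalizing constant, C <= |lambda_j|
post = eigvecs @ (C / eigvals * beta)     # sum_j beta_j (C/lambda_j) |u_j>
x_state = post / np.linalg.norm(post)     # state after a successful 'well' outcome

x_exact = np.linalg.solve(A, b)
print(np.allclose(x_state, x_exact / np.linalg.norm(x_exact)))  # True
```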
Main Loop
The body of the algorithm follows the amplitude amplification procedure: starting with $U_{\mathrm{invert}} B |\text{initial}\rangle$, the following operation is repeatedly applied:
- $U_{\mathrm{invert}} B R_{\text{init}} B^\dagger U_{\mathrm{invert}}^\dagger R_{\text{succ}}$,
where
- $R_{\text{succ}} = I - 2|\text{well}\rangle\langle\text{well}|$,
and
- $R_{\text{init}} = I - 2|\text{initial}\rangle\langle\text{initial}|$.
After each repetition, $S$ is measured and will produce a value of 'nothing', 'well', or 'ill' as described above. This loop is repeated until 'well' is measured, which occurs with a probability $p$. Rather than repeating $O(1/p)$ times to minimize error, amplitude amplification is used to achieve the same error resilience using only $O(1/\sqrt{p})$ repetitions.
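The quadratic saving can be illustrated numerically using the standard two-dimensional rotation model of amplitude amplification (an abstract toy model, not a simulation of $U_{\mathrm{invert}}$): with success probability $p$, roughly $\pi/(4\sqrt{p})$ amplification iterations suffice, versus about $1/p$ plain repetitions.

```python
import numpy as np

for p in [0.1, 0.01, 0.001]:
    theta = np.arcsin(np.sqrt(p))            # initial success amplitude sin(theta)
    k = int(np.floor(np.pi / (4 * theta)))   # number of amplification iterations
    boosted = np.sin((2 * k + 1) * theta)**2 # success probability after k iterations
    print(p, k, round(boosted, 3), int(np.ceil(1 / p)))
```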
Scalar Measurement
After successfully measuring 'well' on register $S$, the system will be in a state proportional to:
- $\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle\,|\text{well}\rangle^S$.
Finally, we perform the quantum measurement corresponding to the operator $M$ and obtain an estimate of the value of $\langle x|M|x\rangle$.
Run Time Analysis
[edit]Classical Efficiency
The best classical algorithm which produces the actual solution vector $\vec{x}$ is Gaussian elimination, which runs in $O(N^3)$ time.
If $A$ is s-sparse, where $s$ is significantly smaller than $N$, then the conjugate gradient method can be used to find the solution vector in time $O(Ns\kappa\log(1/\varepsilon))$ by minimizing the quadratic function $|A\vec{x} - \vec{b}|^2$.
When only a summary statistic of the solution vector is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of $\vec{x}^\dagger M \vec{x}$ in $O(N\sqrt{\kappa})$ time.
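For comparison, the classical sparse baseline can be sketched with SciPy's conjugate gradient solver on an illustrative sparse, well-conditioned system:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

N = 1000
# Illustrative sparse, well-conditioned tridiagonal system (s = 3 nonzeros per row).
A = diags([-1.0, 3.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csr")
b = np.ones(N)

x, info = cg(A, b)                       # info == 0 means the iteration converged
print(info, np.linalg.norm(A @ x - b))   # small residual norm
```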
Quantum Efficiency
The quantum algorithm for solving linear systems of equations originally proposed by Harrow et al. was shown to run in $O(\kappa^2 \log(N)/\varepsilon)$ time. The runtime of this algorithm was subsequently improved to $O(\kappa \log^3(\kappa) \log(N)/\varepsilon^3)$ by Andris Ambainis.[5]
Optimality
An important factor in the performance of the matrix inversion algorithm is the condition number $\kappa$ of $A$, which represents the ratio of $A$'s largest and smallest eigenvalues. As the condition number increases, $A$ becomes closer to a matrix which cannot be inverted, the solution vector becomes less stable, and finding it using gradient descent methods such as the conjugate gradient method becomes harder. This algorithm assumes that all eigenvalues of the matrix $A$ lie between $\frac{1}{\kappa}$ and 1, in which case the claimed run-time proportional to $\kappa^2$ will be achieved. Therefore, the speedup over classical algorithms is increased further when $\kappa$ is a $\mathrm{poly}\log(N)$.[1]
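The condition number itself is easy to illustrate classically (with arbitrary example matrices): it is the ratio of the largest to the smallest eigenvalue magnitude, and it blows up as the matrix approaches singularity.

```python
import numpy as np

def condition_number(A):
    """Ratio of the largest to the smallest eigenvalue magnitude of a Hermitian A."""
    eigvals = np.abs(np.linalg.eigvalsh(A))
    return eigvals.max() / eigvals.min()

well_conditioned = np.array([[2.0, 0.1], [0.1, 1.0]])
nearly_singular  = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-6]])

print(condition_number(well_conditioned))  # small kappa
print(condition_number(nearly_singular))   # huge kappa: inversion is unstable
```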
If the run-time of the algorithm were made poly-logarithmic in $\kappa$, then problems solvable on $n$ qubits could be solved in $\mathrm{poly}(n)$ time, causing the complexity class BQP to be equal to PSPACE.[1]
Error Analysis
Performing the phase estimation, which is the dominant source of error, is done by simulating $e^{iAt}$. Assuming that $A$ is s-sparse, this can be done with an error bounded by a constant $\varepsilon$, which will translate into the additive error achieved in the output state $|x\rangle$.
The phase estimation step errs by $O(1/t_0)$ in estimating $\lambda$, which translates into a relative error of $O(1/(\lambda t_0))$ in $\lambda^{-1}$. If $\lambda \geq 1/\kappa$, taking $t_0 = O(\kappa/\varepsilon)$ induces a final error of $\varepsilon$. This requires that the overall run-time be increased proportionally to $O(\kappa)$ to minimize error.
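The relative-error step can be made explicit with a first-order expansion (a standard argument, spelled out here for clarity): writing the estimate as $\tilde{\lambda} = \lambda + \delta$ with $|\delta| = O(1/t_0)$,
- $\frac{1}{\tilde{\lambda}} = \frac{1}{\lambda}\left(1 - \frac{\delta}{\lambda} + O\!\left(\frac{\delta^2}{\lambda^2}\right)\right)$,
so the relative error in $\lambda^{-1}$ is $O\!\left(\frac{1}{\lambda t_0}\right) \leq O\!\left(\frac{\kappa}{t_0}\right)$ whenever $\lambda \geq 1/\kappa$.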
Experimental Realization
While there does not yet exist a quantum computer that can truly offer a speedup over a classical computer, implementation of a "proof of concept" remains an important milestone in the development of a new quantum algorithm. Demonstrating the quantum algorithm for linear systems of equations remained a challenge for years after its proposal, until 2013 when it was demonstrated by Barz et al. and Pan et al. in parallel.
Barz et al.
On February 5, 2013, Barz et al. demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separate controlled-NOT gates were realized, where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al. found that the fidelity of the obtained output state ranged from 64.7% to 98.1%, due to the influence of higher-order emissions from spontaneous parametric down-conversion.[2]
Pan et al.
On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance quantum information processor. The implementation was tested using simple linear systems of only two variables. Across three experiments they obtained the solution vector with over 96% fidelity.[3]
Applications
Quantum computers are devices that harness quantum mechanics to perform computations in ways that classical computers cannot. For certain problems, quantum algorithms supply exponential speedups over their classical counterparts, the most famous example being Shor's factoring algorithm. Few such exponential speedups are known, and those that are (such as the use of quantum computers to simulate other quantum systems) have so far found limited use outside the domain of quantum mechanics. This algorithm provides an exponentially faster method of estimating features of the solution of a set of linear equations, which is a problem ubiquitous in science and engineering, both on its own and as a subroutine in more complex problems.
Linear Differential Equation Solving
Dominic Berry proposed a new algorithm for solving linear time-dependent differential equations as an extension of the quantum algorithm for solving linear systems of equations. Berry provides an efficient algorithm for solving the full time evolution under sparse linear differential equations on a quantum computer.[6]
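The core reduction can be illustrated classically: discretizing $d\vec{x}/dt = A\vec{x} + \vec{b}$ in time and stacking the update equations yields one large, sparse, block-bidiagonal linear system whose solution contains the whole trajectory. The sketch below uses a simple forward-Euler discretization rather than Berry's higher-order encoding:

```python
import numpy as np

# Toy linear ODE dx/dt = A x + b (placeholders), forward-Euler discretized.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
b = np.array([0.1, 0.0])
x0 = np.array([1.0, 0.0])
n, steps, h = 2, 50, 0.02

# Stack the updates x_{k+1} - (I + h A) x_k = h b into one block system L z = r,
# where z = (x_0, x_1, ..., x_steps).
dim = n * (steps + 1)
L = np.eye(dim)
r = np.zeros(dim)
r[:n] = x0                                    # first block pins down x_0
for k in range(steps):
    row, col = (k + 1) * n, k * n
    L[row:row + n, col:col + n] = -(np.eye(n) + h * A)
    r[row:row + n] = h * b

z = np.linalg.solve(L, r)                     # whole trajectory at once

# Compare the final block against ordinary step-by-step Euler integration.
x = x0.copy()
for _ in range(steps):
    x = x + h * (A @ x + b)
print(np.allclose(z[-n:], x))                 # True
```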
Least-Squares Fitting
Wiebe et al. provide a new quantum algorithm to determine the quality of a least-squares fit, in which a continuous function is used to approximate a set of discrete points, by extending the quantum algorithm for linear systems of equations. As the number of discrete points increases, the time required to produce a least-squares fit, even on a quantum computer running a quantum state tomography algorithm, becomes very large. Wiebe et al. find that in many cases their algorithm can efficiently find a concise approximation of the data points, eliminating the need for the higher-complexity tomography algorithm.[7]
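Classically, least-squares fitting reduces to exactly the kind of linear system the algorithm targets, namely the normal equations $F^\dagger F \vec{c} = F^\dagger \vec{y}$. A small NumPy sketch with an illustrative polynomial basis computes a fit and a measure of its quality:

```python
import numpy as np

# Noisy samples of an underlying function (illustrative data).
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 50)
ys = 2.0 * xs**2 - xs + 0.3 + 0.05 * rng.standard_normal(xs.size)

# Design matrix for a quadratic fit; the normal equations F^T F c = F^T y
# form a Hermitian linear system of the kind solved by the quantum algorithm.
F = np.vander(xs, 3, increasing=True)         # columns: 1, x, x^2
coeffs = np.linalg.solve(F.T @ F, F.T @ ys)

residual = ys - F @ coeffs
quality = 1.0 - np.sum(residual**2) / np.sum((ys - ys.mean())**2)  # R^2 fit quality
print(coeffs, round(quality, 3))
```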
Machine Learning and Big Data Analysis
Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimension of the space. Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and are thus well-suited platforms for machine learning algorithms.[8]
The quantum algorithm for linear systems of equations has been applied to a support vector machine, which is an optimized linear or non-linear binary classifier. A support vector machine can be used for supervised machine learning, in which a training set of already classified data is available, or unsupervised machine learning, in which all data given to the system is unclassified. Rebentrost et al. show that a quantum support vector machine can be used for big data classification and achieve an exponential speedup over classical computers.[9]
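The connection to linear-system solving can be sketched classically. In the least-squares SVM formulation used in this line of work, training reduces to solving a single Hermitian linear system in the bias and the support-vector coefficients; the following NumPy sketch uses a linear kernel and toy data (all values illustrative):

```python
import numpy as np

# Toy 2D training data with labels +/-1 (illustrative).
X = np.array([[1.0, 1.0], [1.5, 0.8], [-1.0, -1.2], [-0.8, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
gamma = 10.0                                    # regularization parameter

# Least-squares SVM training system:
# [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T, with linear kernel K.
m = X.shape[0]
K = X @ X.T
F = np.zeros((m + 1, m + 1))
F[0, 1:] = 1.0
F[1:, 0] = 1.0
F[1:, 1:] = K + np.eye(m) / gamma
rhs = np.concatenate([[0.0], y])

sol = np.linalg.solve(F, rhs)
b, alpha = sol[0], sol[1:]

# Classify a new point: sign(sum_i alpha_i <x_i, x> + b).
x_new = np.array([0.9, 1.1])
print(np.sign(alpha @ (X @ x_new) + b))         # 1.0
```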
References
- ^ a b c Quantum algorithm for solving linear systems of equations, by Harrow et al.
- ^ a b Solving systems of linear equations on a quantum computer, by Barz et al.
- ^ a b Experimental realization of quantum algorithm for solving linear systems of equations, by Pan et al.
- ^ Quantum Computer Runs The Most Practically Useful Quantum Algorithm, by Lu and Pan.
- ^ Variable time amplitude amplification and a faster quantum algorithm for solving systems of linear equations, by Andris Ambainis. http://arxiv.org/abs/1010.4458
- ^ High-order quantum algorithm for solving linear differential equations, by Dominic Berry. http://arxiv.org/abs/1010.2745
- ^ Quantum Data Fitting, by Wiebe et al.
- ^ Quantum algorithms for supervised and unsupervised machine learning, by Lloyd et al.
- ^ Quantum support vector machine for big feature and big data classification, by Rebentrost et al.
Category:Quantum algorithms
Category:Quantum information science
Category:Articles containing proofs