Computers use quantum mechanics to work

Quantum computers - what is already feasible today

Michael Marthaler

What makes a quantum computer special is not simply extremely fast computing power. To use one sensibly, you also need problems with particular properties: problems that allow the laws of quantum mechanics to be exploited in finding a solution.

Do we already have the problems that we could solve with a quantum computer? (© Fotolia / Jürgen Fälchle)

For a long time the quantum computer was only a subject of basic research, but in the last two years the topic has also arrived in industry. Numerous large companies such as Google, IBM and Intel are working on the implementation of a quantum computer.

The quantum computer allows certain special problems to be solved far faster than would be possible with conventional computers. However, a quantum computer cannot simply be viewed as a faster computer. The problems it can solve must have specific properties that allow the laws of quantum mechanics to be used to reach a solution more quickly.

The basic unit of the quantum computer is the quantum mechanical bit, or qubit for short. While the conventional bit on which our current computers are based takes either the value 0 or 1, a qubit can take both values at the same time.

Like many things in quantum mechanics, this may sound very mysterious, but you can think of it like an arrow moving on the surface of a sphere. With a bit, the arrow can only point up or down. With a qubit, the arrow can point in any direction.
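The arrow picture corresponds to describing a qubit by two complex amplitudes. As a minimal sketch (using NumPy purely for illustration, not any real quantum hardware interface), a qubit state can be written as a normalized pair of amplitudes whose squared magnitudes give the measurement probabilities:

import numpy as np

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1 -- the "direction of the arrow" on the sphere.
ket0 = np.array([1, 0], dtype=complex)   # arrow pointing up   -> classical 0
ket1 = np.array([0, 1], dtype=complex)   # arrow pointing down -> classical 1
plus = (ket0 + ket1) / np.sqrt(2)        # arrow on the equator: 0 and 1 at once

# The squared magnitudes of the amplitudes are the probabilities
# of reading out 0 or 1.
print(np.abs(plus) ** 2)                 # [0.5 0.5]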

This property is actually nothing special. The properties of the quantum computer, which allow an immense acceleration, only become apparent when one looks at many qubits.

In fact, the representation of the bit and the qubit as an arrow is well suited to illustrating one of the major problems of the quantum computer. If a disturbance slightly changes the direction of a classical bit's arrow, the essential information, up or down, is retained: the direction can still be assigned to the value one or zero.

Quantum computers are very error-prone

A qubit, however, must be able to point in any direction, so any disturbance that changes the direction of the arrow automatically introduces an error into the state of the qubit. Quantum computers are therefore fundamentally more error-prone than conventional computers.

The advantage of the quantum computer emerges as soon as we compare many bits with many qubits. 8 bits can store exactly one number between 0 and 255. 8 qubits, on the other hand, can be in all 256 possibilities at the same time. With a ninth qubit there are already 512 possibilities, with a tenth 1,024. Just 34 qubits are enough to represent more than 10 billion values at the same time.
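A quick back-of-the-envelope check of these numbers, simply evaluating 2 to the power of n in plain Python:

# Number of basis states an n-qubit register can hold simultaneously: 2**n
for n in (8, 9, 10, 34):
    print(f"{n:2d} qubits -> {2 ** n:,} simultaneous values")

#  8 qubits ->            256
#  9 qubits ->            512
# 10 qubits ->          1,024
# 34 qubits -> 17,179,869,184   (more than 10 billion)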

If an operation is then carried out on the qubits, it is carried out on all states simultaneously - i.e. in the case of 34 qubits directly on 10 billion states simultaneously.

This is the fundamental reason why the quantum computer can achieve a huge acceleration for certain problems. However, it is very difficult to find algorithms that actually exploit this acceleration, because measuring the qubits reduces the state to just one of the possible values, or, as one would say in quantum mechanics, projects it onto one of the possibilities.
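The following small NumPy sketch illustrates both points for a hypothetical 3-qubit register: an operation acts on all 8 amplitudes at once, but a measurement returns only a single value. The register size and the phase operation are chosen purely for illustration.

import numpy as np

n = 3                       # a small, purely illustrative register
dim = 2 ** n                # 8 basis states
state = np.ones(dim, dtype=complex) / np.sqrt(dim)   # equal superposition over all 8 values

# One operation (here an arbitrary diagonal phase) acts on every amplitude at once ...
phases = np.exp(1j * np.pi * np.arange(dim) / dim)
state = phases * state

# ... but a measurement projects the register onto just ONE of the 2**n values,
# drawn with probability |amplitude|^2.
probabilities = np.abs(state) ** 2
outcome = int(np.random.choice(dim, p=probabilities))
print(f"measured value: {outcome:0{n}b}")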

That is why most algorithms work by first spanning the entire space of possibilities and then removing states again by means of interference, so that in the end only one number remains: the result of the calculation.
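Interference can be shown with a single qubit: applying the Hadamard operation once spreads the state over 0 and 1, applying it again makes the amplitudes for 1 cancel, so only one outcome survives. A minimal NumPy sketch:

import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)     # Hadamard gate: spreads or recombines amplitudes

state = np.array([1, 0], dtype=complex)  # start in state 0
state = H @ state                        # now 0 and 1 with equal amplitude
state = H @ state                        # interference: the amplitudes for 1 cancel out

print(np.round(np.abs(state) ** 2, 3))   # [1. 0.] -> the measurement gives 0 with certainty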

The basic concepts of the quantum computer were already known in the early 1980s, but the first algorithms were not found until the early 1990s.

The first sensational algorithm to solve a problem of far-reaching importance was introduced by Peter Shor in 1994. The famous Shor's algorithm allows a number to be decomposed efficiently into its prime factors.

This algorithm would undermine the public-key cryptographic methods in widespread use today, because these methods rely on the difficulty of decomposing large numbers into their prime factors.

It would take a conventional supercomputer approximately 150,000 years to factor a 300-digit number into its prime factors. How long a quantum computer would need for the same task is difficult to estimate, since it depends on the processor architecture, but in principle a quantum computer should be able to perform the same decomposition in a few days.
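What Shor's algorithm computes is an ordinary prime factorization. Purely as an illustration of the task itself (not of the quantum algorithm), the classical factorint function from SymPy handles small products of two primes instantly, while classical methods become hopeless for the 300-digit numbers mentioned above:

from sympy import factorint, randprime

# Two small random primes and their product -- the kind of decomposition that
# Shor's algorithm performs efficiently even for numbers with hundreds of digits.
p = randprime(10**6, 10**7)
q = randprime(10**6, 10**7)
n = p * q

print(factorint(n))   # {p: 1, q: 1} -- easy for a number of this size,
                      # hopeless classically for 300 digits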

Together with a second major development, Shor's algorithm sparked massive interest in the implementation of the quantum computer in basic research.

The second essential finding was a solution to the quantum computer's susceptibility to errors. It was shown that many physical qubits can be networked to form a so-called logical qubit. The redundancy this creates makes it possible to reduce the error rate of the logical qubit to any desired level, provided the error probability of each individual qubit falls below a certain threshold.

Correction of errors in qubits
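The redundancy idea can be illustrated with a classical repetition code: one logical bit is stored in several noisy physical copies and decoded by majority vote. Real quantum error correction (for example the surface code) is far more involved, but the principle that many imperfect physical carriers yield one much more reliable logical unit is the same. A sketch in plain Python, with illustrative numbers:

import random

def noisy_copy(bit, p_error):
    """Return the bit, flipped with probability p_error (one noisy physical carrier)."""
    return bit ^ (random.random() < p_error)

def logical_bit(bit, p_error, copies=5):
    """Store one logical bit in several physical copies and decode by majority vote."""
    received = [noisy_copy(bit, p_error) for _ in range(copies)]
    return int(sum(received) > copies / 2)

# With a physical error rate of 5 %, the majority vote over five copies already
# pushes the logical error rate well below 1 %; more copies push it down further.
# Quantum codes need the physical rate below a much stricter threshold, but the
# redundancy principle is the same.
trials = 100_000
errors = sum(logical_bit(0, 0.05) for _ in range(trials))
print(f"logical error rate: {errors / trials:.4f}")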

This threshold was reached in 2015 with qubits made of superconducting aluminum. This means there is no longer any fundamental obstacle to realizing the quantum computer.

Now it is necessary to integrate more and more qubits on a processor so that enough logical qubits are available to solve meaningful tasks. With the methods currently used, between 1,000 and 10,000 physical qubits are required for a single error-corrected logical qubit. Accordingly, on the order of one million qubits are needed to implement 100 logical qubits.

The integration of such a large number of qubits should be fundamentally possible.

Similar integration densities are also achieved with conventional processors. But given that the largest chips introduced by Intel and IBM contain around 50 qubits, it is clear that there is still a long way to go.

It should also be noted that it has not yet been shown that the qubits on these chips have error probabilities below the threshold for error correction.

The D-Wave company already offers processors with more than 2000 qubits, but these qubits have extremely high error probabilities, well above the threshold for error correction.

The main question currently facing the use of the quantum computer is whether applications with 50 to 100 error-prone qubits are possible, or whether the age of the quantum computer will only dawn with full error correction.

It is clear that Shor's algorithm requires error correction, so even by an optimistic estimate it will take at least ten years before existing cryptographic technology becomes obsolete.

However, there are highly specialized algorithms for which an advantage can already be shown with 50 error-prone qubits. These are calculations that can be carried out with 50 qubits but would require at least an entire large-scale computing center if one wanted to perform them with conventional computers.

However, no practical application is known for these algorithms. One of the applications for which the quantum computer is most efficient is the simulation of molecules and materials.

At the atomic level, the properties of all materials are described by the laws of quantum mechanics, which makes such simulations almost impossible on conventional computers. On a quantum computer, by contrast, they can be carried out very efficiently. Since the number of operations required remains relatively small, the number of errors also remains small.
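A rough, illustrative calculation shows why the classical route fails: merely storing the quantum state of n interacting two-level systems requires 2 to the power of n complex amplitudes.

# Memory needed merely to STORE the state of n two-level systems classically,
# assuming 16 bytes per complex amplitude (double precision).
for n in (30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n} qubits: {amplitudes:,} amplitudes ~ {gib:,.0f} GiB")

# 30 qubits:         1,073,741,824 amplitudes ~         16 GiB
# 40 qubits:     1,099,511,627,776 amplitudes ~     16,384 GiB
# 50 qubits: 1,125,899,906,842,624 amplitudes ~ 16,777,216 GiB (about 16 PiB)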

Simulation of molecules can be of great value in the chemical and pharmaceutical industries. It seems quite possible that quantum computers can be used in this area without error correction.

There are also numerous efforts to improve quantum algorithms in the field of artificial intelligence so that they can be used without error correction. Here, however, significant scientific breakthroughs will be needed to reduce the number of operations required far enough that errors during the computation can be tolerated.