Quantum Computing and Quantum Information

MINISTRY OF EDUCATION OF THE RUSSIAN FEDERATION

STATE EDUCATIONAL INSTITUTION

Essay

Quantum Computing

Introduction

Chapter I. Basic Concepts of Quantum Mechanics

Chapter II. Basic concepts and principles of quantum computing

Chapter III. Grover's algorithm

Conclusion

Bibliography

Introduction

Imagine a computer whose memory is exponentially larger than its apparent physical size would suggest; a computer that can operate simultaneously on an exponentially large set of inputs; a computer that performs its calculations in a Hilbert space that remains hazy to most of us.

You would then be thinking of a quantum computer.

The idea of a computing device based on quantum mechanics was first considered in the early 1970s and early 1980s by physicists and computer scientists such as Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of the Argonne National Laboratory in Illinois, David Deutsch of the University of Oxford, and later Richard P. Feynman of the California Institute of Technology (Caltech). The idea arose when scientists became interested in the fundamental limits of computation. They realized that if technology continued to follow the steady miniaturization of computing circuits packed into silicon chips, individual elements would eventually be no larger than a few atoms. A problem then arises, since at the atomic level the laws of quantum physics apply, not those of classical physics. This raised the question of whether a computer could be designed on the principles of quantum physics.

Feynman was one of the first to attempt to answer this question. In 1982 he proposed a model of an abstract quantum system suitable for computation. He also explained how such a system could be a simulator in quantum physics. In other words, physicists could conduct computational experiments on such a quantum computer.

Later, in 1985, Deutsch realized that Feynman's statement could eventually lead to a general-purpose quantum computer, and he published a major theoretical paper showing that any physical process could, in principle, be simulated on a quantum computer.

Unfortunately, all they could come up with at the time were a few rather contrived mathematical problems, until Shor published his paper in 1994, in which he presented an algorithm for solving on a quantum computer an important problem of number theory: factoring integers into primes. He showed how a set of mathematical operations designed specifically for a quantum computer can factor huge numbers fantastically fast, much faster than conventional computers. It was a breakthrough that turned quantum computing from an academic curiosity into a problem of interest to the whole world.


Chapter I. Basic Concepts of Quantum Mechanics

At the end of the 19th century it was widely believed among scientists that physics was a science "practically complete," and that very little remained for its full "completion": explaining the structure of the optical spectra of atoms and the spectral distribution of thermal radiation. Optical spectra of atoms arise from the emission or absorption of light (electromagnetic waves) by free or weakly bound atoms; such spectra are possessed, in particular, by monatomic gases and vapors.

Thermal radiation is the mechanism of heat transfer between spatially separated parts of a body by means of electromagnetic radiation.

However, the beginning of the 20th century led to the understanding that there could be no question of any “completeness”. It became clear that in order to explain these and many other phenomena, it was necessary to radically revise the ideas underlying physical science.

For example, starting from the wave theory of light, it turned out to be impossible to give an exhaustive explanation of the totality of optical phenomena.

When solving the problem of the spectral composition of radiation, the German physicist Max Planck suggested in 1900 that the emission and absorption of light by matter occurs in finite portions, or quanta. The energy of a photon, a quantum of electromagnetic radiation (light, in the narrow sense), is determined by the expression

E = hν,

where ν is the frequency of the emitted (or absorbed) light, and h is a universal constant, now called Planck's constant.

The Dirac constant (reduced Planck constant)

ħ = h / 2π

is often used instead. The quantum energy is then expressed as E = ħω, where ω = 2πν is the angular frequency of the radiation.

The contradictions between treating light as a stream of particles and as a wave led to the concept of wave-particle duality.

On the one hand, the photon exhibits the properties of an electromagnetic wave in the phenomena of diffraction (the bending of waves around obstacles comparable in size to the wavelength) and interference (the superposition of waves of the same frequency and fixed phase relation) on scales comparable to the photon wavelength. For example, single photons passing through a double slit create an interference pattern on the screen that can be described by Maxwell's equations. Nevertheless, experiment shows that photons are emitted and absorbed as wholes by objects whose dimensions are much smaller than the photon wavelength (for example, atoms) or which, to some approximation, can be considered pointlike (for example, an electron); that is, they behave like particles, or corpuscles. In the macroworld around us there are two fundamental ways of transferring energy and momentum between two points in space: the direct motion of matter from one point to another, and the wave process of energy transfer without transfer of matter. All energy carriers here are strictly divided into corpuscular and wave-like. In the microworld, by contrast, no such division exists: all particles, and photons in particular, possess both corpuscular and wave properties simultaneously. The situation is counterintuitive, but it is an objective property of quantum models.

Nearly monochromatic radiation of frequency ν emitted by a light source can be thought of as consisting of "packets of radiation," which we call photons. Monochromatic radiation is radiation with a very small spread of frequencies, ideally a single wavelength.

The propagation of photons in space is correctly described by the classical Maxwell equations. In this case, each photon is treated as a classical wave train, defined by two vector fields: the electric field strength and the magnetic field induction. A wave train is a series of disturbances with breaks between them. The radiation of an individual atom cannot be monochromatic, because the emission lasts for a finite period of time, with intervals of rise and decay.

It is incorrect to interpret the sum of the squared field amplitudes as the energy density of the space in which the photon moves; instead, every quantity that depends quadratically on the wave amplitude should be interpreted as a quantity proportional to the probability of some process. For example, it is not the energy deposited by a photon in a given region, but rather a quantity proportional to the probability of finding the photon in that region.

The energy transferred by a photon at any point in space is always equal to hν. Accordingly, the mean energy in a region is proportional to hν, to the probability of finding a photon in that region, and to the number of photons.

In 1921 the Stern-Gerlach experiment confirmed that atoms have spin, and demonstrated the spatial quantization of the direction of their magnetic moments. Spin (from the English spin: to rotate, to revolve) is the intrinsic angular momentum of an elementary particle; it is quantum in nature and is not associated with the motion of the particle as a whole. When the concept of spin was introduced, it was assumed that the electron could be regarded as a "spinning top" and its spin as a characteristic of that rotation. Spin is also the name given to the intrinsic angular momentum of an atomic nucleus or atom; in this case the spin is defined as the vector sum (calculated according to the rules for addition of angular momenta in quantum mechanics) of the spins of the elementary particles forming the system and the orbital angular momenta of those particles due to their motion within the system.

Spin is measured in units of ħ (the reduced Planck constant, or Dirac constant) and equals ħ√(J(J+1)), where J is an integer (including zero) or half-integer positive number characteristic of each kind of particle: the spin quantum number, usually called simply the spin (one of the quantum numbers). Accordingly, one speaks of a particle having integer or half-integer spin. The concepts of spin and spin quantum number should not be confused, however. The spin quantum number is the quantum number that determines the magnitude of the spin of a quantum system (an atom, ion, atomic nucleus, or molecule), i.e., of its intrinsic (internal) angular momentum. The projection of the spin onto any fixed direction z in space can take the values J, J−1, ..., −J. Thus a particle with spin J can be in 2J+1 spin states (for J = 1/2, in two states), which is equivalent to possessing an additional internal degree of freedom.

A key element of quantum mechanics is the Heisenberg uncertainty principle, which states that the position of a particle in space and its momentum cannot both be determined exactly at the same time. This principle is connected with the quantization of light and with the proportional dependence of the photon energy on its frequency.

The motion of a photon can be described by a system of Maxwell's equations, while the equation of motion of any other elementary particle such as an electron is described by the Schrödinger equation, which is more general.

The system of Maxwell's equations is invariant under Lorentz transformations. In the special theory of relativity, Lorentz transformations are the transformations applied to the space-time coordinates (x, y, z, t) of each event in the transition from one inertial reference frame to another. In effect, these transformations act not only on space, like Galileo's transformations, but also on time.

Chapter II. Basic concepts and principles of quantum computing

Although computers have become smaller and much faster at doing their job, the task itself remains the same: manipulating a sequence of bits and interpreting that sequence as a useful computational result. A bit is the fundamental unit of information, usually represented as 0 or 1 in a digital computer. Each classical bit is physically realized by a macroscopic physical system, such as the magnetization on a hard drive or the charge on a capacitor. For example, a text composed of n characters and stored on the hard drive of a typical computer is described by a string of 8n zeros and ones. Herein lies the fundamental difference between a classical computer and a quantum computer. While a classical computer obeys the well-understood laws of classical physics, a quantum computer is a device that exploits quantum mechanical phenomena (especially quantum interference) to realize an entirely new way of processing information.

In a quantum computer, the fundamental unit of information (called a quantum bit, or qubit) is not binary in nature. This property of the qubit is a direct consequence of its subjection to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to logical 0 or 1, like a classical bit, but also in states corresponding to superpositions of these classical states. In other words, a qubit can exist as a zero, as a one, and as both 0 and 1 simultaneously, with a numerical coefficient for each state whose squared magnitude gives the probability of finding the qubit in that state.

The idea that a quantum computer might be possible goes back to the work of R. Feynman in 1982-1986. Considering the computation of the evolution of quantum systems on a digital computer, Feynman found this problem to be "intractable": the memory and speed resources of classical machines are insufficient for solving quantum problems. For example, a system of n two-state quantum particles (spins 1/2) has 2^n basis states; to describe it, one must specify (and store in computer memory) the 2^n amplitudes of these states. From this negative result Feynman conjectured that a "quantum computer" would likely have properties allowing quantum problems to be solved on it.
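Feynman's counting argument is easy to reproduce numerically: the number of amplitudes doubles with every added particle. A minimal sketch in pure Python (the 16 bytes per complex amplitude is an illustrative assumption):

```python
# Feynman's counting argument: describing n two-state quantum particles
# requires 2**n complex amplitudes, so classical memory grows exponentially.

def amplitudes(n):
    """Number of basis-state amplitudes for n spin-1/2 particles."""
    return 2 ** n

def memory_bytes(n):
    """Memory needed if each amplitude is stored as a complex double (16 bytes)."""
    return 16 * amplitudes(n)

for n in (10, 30, 50):
    print(n, amplitudes(n), memory_bytes(n))
```

Already at n = 50 the storage requirement (about 1.8 × 10^16 bytes) exceeds any realistic classical memory.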

"Classical" computers are built on transistor circuits with nonlinear relationships between input and output voltages. They are essentially bistable elements; for example, when the input voltage is low (logical "0"), the output voltage is high (logical "1"), and vice versa. In the quantum world such a bistable transistor circuit can be compared with a two-level quantum particle: we assign the logical value 0 to the state |0⟩ and the logical value 1 to the state |1⟩. Transitions in the bistable transistor circuit correspond here to transitions between the levels: |0⟩ ↔ |1⟩. However, a quantum bistable element, called a qubit, has a new property compared with the classical one: superposition of states. It can be in any superposition state ψ = a|0⟩ + b|1⟩, where a and b are complex numbers with |a|² + |b|² = 1. The states of a quantum system of n two-level particles have, in the general case, the form of a superposition of 2^n basis states. Ultimately, the quantum principle of superposition of states gives a quantum computer fundamentally new "capacities."
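The superposition state ψ = a|0⟩ + b|1⟩ described above can be modeled directly as a pair of complex amplitudes with |a|² + |b|² = 1. A small illustrative sketch (the function names are ours, not part of any standard API):

```python
import math

def normalize(a, b):
    """Scale the amplitudes so that |a|^2 + |b|^2 = 1."""
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return a / norm, b / norm

def probabilities(a, b):
    """Born rule: probabilities of finding the qubit in |0> and |1>."""
    return abs(a) ** 2, abs(b) ** 2

# An equal-weight superposition with a relative phase between |0> and |1>.
a, b = normalize(1, 1j)
p0, p1 = probabilities(a, b)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

The complex amplitudes carry a phase that has no classical counterpart; only their squared magnitudes are observed as probabilities.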

It has been proven that a quantum computer can be built from only two kinds of elements (gates): a one-qubit rotation gate and a two-qubit controlled-NOT (CNOT) gate. The 2×2 matrix of the one-qubit gate has the form:

U(θ, φ) = ( cos(θ/2)   −i e^(−iφ) sin(θ/2) ; −i e^(iφ) sin(θ/2)   cos(θ/2) )    (1)

The gate (1) describes a rotation of the qubit state vector from the z axis to the polar axis defined by the angles θ and φ. If θ and φ are irrational multiples of π, then by repeated application of the gate the state vector can be given any predetermined orientation; this is precisely the "universality" of the one-qubit gate of the form (1). In a particular case we obtain the one-qubit logical element NOT: NOT|0⟩ = |1⟩, NOT|1⟩ = |0⟩. In the physical implementation of the NOT element, the quantum particle (qubit) must be acted on by an external pulse that transfers the qubit from one state to the other. The controlled-NOT gate is executed by acting on two interacting qubits: through the interaction, one qubit controls the evolution of the other. Transitions under the influence of external pulses are well known in pulsed magnetic resonance spectroscopy. The NOT gate corresponds to a spin flip under the action of a resonant pulse (a rotation of the magnetization about the chosen axis by the angle π). The CNOT gate is executed on two spins 1/2 coupled by an Ising-type interaction of the form J S1z S2z, in which spin 1 controls spin 2. CNOT is performed in three steps: a pulse, free precession for a time set by the coupling, and another pulse. If the controlling qubit is in the state |0⟩, the controlled qubit is left unchanged under these influences; if the controlling qubit is in the state |1⟩, the controlled qubit makes the transition |0⟩ → |1⟩ (or |1⟩ → |0⟩). Thus the controlled spin evolves differently depending on the state of the controlling qubit.
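In matrix form, the two gates act on vectors of amplitudes as follows. A pure-Python sketch; the basis ordering |00⟩, |01⟩, |10⟩, |11⟩ with the first qubit as control is our convention:

```python
# NOT gate: swaps the amplitudes of |0> and |1>.
NOT = [[0, 1],
       [1, 0]]

# CNOT gate in the basis |00>, |01>, |10>, |11>:
# flips the second (target) qubit only when the first (control) qubit is 1.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def apply(gate, state):
    """Multiply a gate matrix by a state vector of amplitudes."""
    return [sum(row[j] * state[j] for j in range(len(state))) for row in gate]

print(apply(NOT, [1, 0]))         # [0, 1]: NOT|0> = |1>
print(apply(CNOT, [0, 0, 1, 0]))  # [0, 0, 0, 1]: CNOT|10> = |11>
```

Note that when the control qubit is 0, CNOT leaves the state unchanged, exactly as described in the text.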

When the implementation of a quantum computer on particular quantum systems is considered, the realizability and properties of the elementary NOT and controlled-NOT gates are investigated first.

For what follows it is also useful to introduce the one-qubit Hadamard transform:

H = (1/√2) ( 1  1 ; 1  −1 ),

so that H|0⟩ = (|0⟩ + |1⟩)/√2 and H|1⟩ = (|0⟩ − |1⟩)/√2.

In magnetic resonance technique these gates are implemented by resonant pulses.

A schematic diagram of a quantum computer is shown in the figure. Before the computer starts working, all qubits (quantum particles) must be brought into the state |0⟩, i.e., into the ground state. This condition is not trivial in itself.


It requires either deep cooling (to temperatures on the order of millikelvins) or the use of polarization techniques. A system of n qubits in the state |00...0⟩ can be regarded as a memory register prepared for writing the input data and carrying out the computation. In addition to this register, additional (auxiliary) registers are usually assumed to be needed for recording intermediate results of the computation. Data are written by acting in one way or another on each qubit of the computer. Let us assume, for example, that a Hadamard transform is performed on each qubit of the register:

As a result, the system passes into a superposition of 2^n basis states, each with amplitude 2^(−n/2). Each basis state is a binary number from 0 to 2^n − 1. The horizontal lines in the figure represent the time axes.
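This register-preparation step can be simulated by applying H to every qubit via a Kronecker product. A sketch in pure Python (n = 3 is chosen purely for illustration):

```python
import math

# One-qubit Hadamard matrix.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def kron(a, b):
    """Kronecker product of two square matrices."""
    na, nb = len(a), len(b)
    return [[a[i // nb][j // nb] * b[i % nb][j % nb]
             for j in range(na * nb)] for i in range(na * nb)]

def apply(gate, state):
    return [sum(row[j] * state[j] for j in range(len(state))) for row in gate]

n = 3
Hn = H
for _ in range(n - 1):   # build H tensored with itself n times
    Hn = kron(Hn, H)

state = [1.0] + [0.0] * (2 ** n - 1)   # the register initialized to |000>
state = apply(Hn, state)
print([round(x, 4) for x in state])    # eight equal amplitudes 2**(-3/2)
```

All 2^n amplitudes come out equal to 2^(−n/2), the uniform superposition described above.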

The algorithm is executed by a unitary transformation U of the superposition, where U is a unitary matrix of dimension 2^n × 2^n. When U is implemented physically by means of pulsed actions on the qubits from outside, it must be represented as a product of matrices each acting on a single qubit or a pair of qubits:

U = U1 · U2 · ... · Uk.   (3)

The number of factors in this decomposition determines the duration (and complexity) of the computation. Each factor in (3) is performed using the operations NOT, CNOT, and H (or their variants).

It is remarkable that the linear unitary operator U acts simultaneously on all terms of the superposition:

U ( 2^(−n/2) Σx |x⟩ |0⟩ ) = 2^(−n/2) Σx |x⟩ |f(x)⟩.   (4)

The results of the computation are written to a spare register, which was in the state |0⟩ before U was applied. In a single run of the computational process we obtain the values of the desired function f(x) for all values of the argument x = 0, ..., 2^n − 1. This phenomenon is called quantum parallelism.

Measuring the result of the computation reduces to projecting the superposition vector of (4) onto one of the basis states |x⟩|f(x)⟩:

P(x) = |⟨x, f(x)|Ψ⟩|² = 2^(−n).   (5)

Here one of the weaknesses of a quantum computer appears: the number x "falls out" at random in the measurement process. To find f(x) for a given x, the computation and measurement have to be repeated many times until x happens to fall out.
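This randomness is easy to see in simulation: sampling measurement outcomes from the uniform superposition returns each basis state with probability 2^(−n), so many repetitions are needed before a particular x appears. A sketch in pure Python (the seed and number of trials are illustrative):

```python
import random

n = 3
N = 2 ** n
probs = [1.0 / N] * N   # uniform superposition: each outcome has probability 2**-n

random.seed(1)
counts = {x: 0 for x in range(N)}
trials = 8000
for _ in range(trials):
    # Draw one measurement outcome from the probability distribution.
    r, acc = random.random(), 0.0
    for x, p in enumerate(probs):
        acc += p
        if r < acc:
            counts[x] += 1
            break

print(counts)   # each of the 8 outcomes occurs roughly 1000 times
```

Any single run yields one random outcome; only the statistics of many runs reveal the full distribution.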

An analysis of the unitary evolution of a quantum system performing a computational process reveals the importance of physical processes such as interference. Unitary transformations are carried out in the space of complex numbers, and the addition of the phases of these numbers has the character of interference. The productivity of Fourier transforms in interference phenomena and in spectroscopy is well known, and it turns out that Fourier transforms are invariably present in quantum algorithms. The Hadamard transform is the simplest discrete Fourier transform. Gates of the NOT and CNOT types can be implemented directly on a Mach-Zehnder interferometer using the phenomenon of photon interference and rotation of the photon's polarization vector.

Various ways of physically realizing quantum computers are being investigated. Model experiments on quantum computing have been performed on a pulsed nuclear magnetic resonance spectrometer. In these models two or three spins (qubits) were used, for example, two spins of 13C nuclei and one proton spin in a trichloroethylene molecule.

However, in these experiments the quantum computer was an "ensemble" computer: the output signals of the computer are composed by a large number of molecules in a liquid solution (~10^20).

To date, proposals have been made for implementing quantum computers on ions and molecules in traps in vacuum, on nuclear spins in liquids (see above), on the nuclear spins of 31P atoms in crystalline silicon, on the spins of electrons in quantum dots created in a two-dimensional electron gas in GaAs heterostructures, and on Josephson junctions. As we see, a quantum computer can in principle be built from atomic particles in vacuum, in liquids, or in crystals. In each case certain obstacles have to be overcome, but among them there are several common ones, stemming from the principles of operation of the qubits in a quantum computer. Let us set the task of creating a full-scale quantum computer containing, say, 10^3 qubits (although even with n = 100 a quantum computer could be a useful tool).

1. Ways must be found to "initialize" the computer's qubits into the state |0⟩. For spin systems in crystals, the use of ultralow temperatures and superstrong magnetic fields is the obvious approach. Spin polarization by pumping can be useful when cooling and high magnetic fields are applied simultaneously.

For ions in vacuum traps, ultralow cooling of ions (atoms) is achieved by laser methods. The need for cold and ultra-high vacuum is also obvious.

2. A technology is needed for selectively acting with pulses on any chosen qubit. In the radio-frequency and spin-resonance domain this means that each spin must have its own resonant frequency (in terms of spectroscopic resolution). The differences in resonant frequencies for spins in molecules arise from chemical shifts for spins of one isotope of one element; the necessary frequency differences also exist between the spins of nuclei of different elements. Common sense suggests, however, that these natural differences in resonant frequencies are hardly sufficient for working with 10^3 spins.

More promising are approaches in which the resonant frequency of each qubit can be controlled from outside. In the proposal for a silicon quantum computer, the qubit is the nuclear spin of an impurity 31P atom. The resonance frequency is determined by the constant A of the hyperfine interaction between the nuclear and electronic spins of the 31P atom. An electric field on a nanoelectrode located above the 31P atom polarizes the atom and changes the constant A (and hence the resonant frequency of the nuclear spin). Thus the presence of the electrode embeds the qubit in an electronic circuit and tunes its resonant frequency.

3. To perform the CNOT (controlled-NOT) operation, an interaction between qubits i and j of the Ising form is required. Such an interaction occurs between nuclear spins in a molecule if the nuclei i and j are separated by one chemical bond. In principle, one must be able to perform this operation for any pair of qubits. A physical interaction of qubits on the same scale and on the principle of "each with each" is hardly possible in a natural environment. There is an obvious need for a way to tune the medium between qubits from outside by introducing electrodes with controlled potentials. In this way one can create, for example, an overlap of the wave functions of electrons in neighboring quantum dots and the appearance of an interaction of the required form between the electron spins. The overlap of the wave functions of the electrons of neighboring 31P atoms gives rise to an interaction of the same form between the nuclear spins.

To provide the operation between distant qubits i and k, between which there is no direct interaction of the required form, the state-exchange (swap) operation must be applied along a chain of neighboring qubits: the state of qubit i is carried step by step to a neighbor of qubit k, the operation is performed there, and the result coincides with what the direct operation would give, since the transported state coincides with the original one.

4. While the unitary transformation corresponding to the chosen algorithm is being performed, the computer's qubits are affected by their environment; as a result, the amplitude and phase of the qubit state vector undergo random changes: decoherence. In essence, decoherence is the relaxation of those degrees of freedom of the particle that are used in the qubit, and the decoherence time is equal to the relaxation time. In nuclear magnetic resonance in liquids, the relaxation times T1 and T2 are 1-10 s. For ions in traps with optical transitions between the levels E0 and E1, the decoherence time is set by the time of spontaneous emission and the time between collisions with residual atoms. Decoherence is obviously a serious obstacle to quantum computing: a computational process once started acquires features of randomness after the decoherence time has elapsed. However, a stable quantum computing process can be maintained for an arbitrarily long time if the methods of quantum coding and error correction (phase and amplitude) are applied systematically. It has been proved that with relatively modest requirements on the error-free execution of elementary operations such as NOT and CNOT (an error probability of no more than 10^-5), quantum error correction (QEC) methods ensure stable operation of a quantum computer.

Active suppression of the decoherence process is also possible if periodic measurements are made on the system of qubits. A measurement will, with high probability, find the particle in the "correct" state, and small random changes of the state vector collapse during the measurement (the quantum Zeno effect). However, it is difficult to say yet how useful such a technique can be, since measurements of this kind can themselves affect the computational process and disrupt it.

5. The states of the qubits after the completion of the computational process must be measured to determine the result of the computation. Today, there is no mastered technology for such measurements. However, the way to search for such a technology is obvious: it is necessary to use amplification methods in quantum measurement. For example, the nuclear spin state is transferred to the electron spin; the orbital wave function depends on the latter; knowing the orbital wave function, it is possible to organize the transfer of charges (ionization); the presence or absence of a single electron charge can be detected by classical electrometric methods. Probe force microscopy will probably play an important role in these measurements.

To date, quantum algorithms have been discovered that lead to an exponential speedup of computations compared with computations on a classical computer. Among them is Shor's algorithm for determining the prime factors of large (many-digit) numbers. This purely mathematical problem is closely connected with the life of society, since modern encryption codes are built on the practical incomputability of such factorizations. It was this circumstance that caused a sensation when Shor's algorithm was discovered. For physicists it is important that the solution of quantum problems (the solution of the Schrödinger equation for many-particle systems) is also exponentially accelerated on a quantum computer.

Finally, it is very important that in the course of research on quantum computing problems, the main problems of quantum physics are subjected to new analysis and experimental verification: problems of locality, reality, complementarity, hidden parameters, wave function collapse.

The ideas of quantum computing and quantum communication emerged a hundred years after the birth of the original ideas of quantum physics. The possibility of building quantum computers and communication systems has been shown by the theoretical and experimental studies carried out to date. Quantum physics is "sufficient" for designing quantum computers based on various "element bases". Quantum computers, if they can be built, will be the technology of the 21st century. Their manufacture will require the creation and development of new technologies at the nanometer and atomic scale. This work may take, apparently, several decades. The construction of quantum computers would be another confirmation of the principle of the inexhaustibility of nature: nature has the means to implement any task correctly formulated by man.

In a conventional computer, information is encoded as a sequence of bits, and these bits are processed sequentially by Boolean logic gates to produce the desired result. Similarly, a quantum computer processes qubits by performing a sequence of operations with quantum gates, each of which is a unitary transformation acting on a single qubit or a pair of qubits. By performing these transformations in sequence, a quantum computer can carry out a complicated unitary transformation on the entire set of qubits prepared in some initial state. After that, a measurement can be made on the qubits, giving the final result of the computation. This similarity between the computations of a quantum and a classical computer suggests that, at least in theory, a classical computer can reproduce the operation of a quantum computer exactly. In other words, a classical computer can do everything a quantum computer can. Then why all the fuss about quantum computers? The point is that, although a classical computer can in theory simulate a quantum computer, it does so very inefficiently, so inefficiently that in practice a classical computer is unable to solve many problems that a quantum computer can. Simulating a quantum computer on a classical one is a computationally hard problem, because the correlations between quantum bits are qualitatively different from the correlations between classical bits, as was first shown by John Bell. Take, for example, a system of only a few hundred qubits: it exists in a Hilbert space of dimension ~10^90, so that simulating it on a classical computer would require exponentially large matrices (to perform calculations for each individual state, which is likewise described by a matrix). This means that a classical computer would need exponentially more time than even a primitive quantum computer.

Richard Feynman was among the first to recognize the potential of the phenomenon of quantum superposition for solving such problems much faster. For example, a system of 500 qubits, which is practically impossible to model classically, is a quantum superposition of 2^500 states. Each value of such a superposition is classically equivalent to a list of 500 ones and zeros. Any quantum operation on such a system, for example a suitably tuned pulse of radio waves performing a controlled-NOT operation on, say, the 100th and 101st qubits, acts on all 2^500 states simultaneously. Thus, in one tick of the computer clock a quantum operation computes not one machine state, as in conventional computers, but 2^500 states at once! Eventually, however, a measurement is made on the system of qubits, and the system collapses into a single quantum state corresponding to a single solution of the problem, a single set of 500 ones and zeros, as dictated by the measurement axiom of quantum mechanics. This is a truly exciting result, since this solution, found by a collective process of quantum parallel computation rooted in superposition, is equivalent to performing the same operation on a classical supercomputer with ~10^150 separate processors (which is, of course, impossible)! The first researchers in this field were naturally inspired by such gigantic possibilities, and a real hunt soon began for problems suited to such computing power. Peter Shor, a researcher and computer scientist at AT&T's Bell Laboratories in New Jersey, proposed a problem that could be solved specifically on a quantum computer, using a quantum algorithm. Shor's algorithm uses the power of quantum superposition to factor large numbers (on the order of 10^200 and greater) in a few seconds.
This problem has an important practical application in encryption, where the generally accepted (and best) encryption algorithm, known as RSA, is based precisely on the difficulty of factoring large composite numbers into prime factors. A computer that solves this problem with ease is naturally of great interest to the many government organizations using RSA, hitherto considered "unbreakable," and to anyone else interested in the security of their data.

Encryption, however, is only one possible application of a quantum computer. Shor has developed a whole set of mathematical operations that can only be performed on a quantum computer. Some of these operations are used in his factorization algorithm. Further, Feynman argued that a quantum computer could act as a simulator for quantum physics, potentially opening the door to many discoveries in this field. At present, the power and capabilities of a quantum computer are mainly the subject of theoretical considerations; the advent of the first truly functional quantum computer will undoubtedly bring many new and exciting practical applications.

Chapter III. Grover's algorithm

The search problem is as follows: there is an unordered database consisting of N elements, of which exactly one satisfies the given condition; this is the element that needs to be found. If an element can be inspected, then determining whether it satisfies the condition takes one step. However, the database has no ordering that could help in selecting the element. The most efficient classical algorithm for this task is to check the elements of the database one by one. If an element satisfies the condition, the search is over; if not, the element is set aside so that it is not checked again. Obviously, this algorithm requires checking N/2 elements on average before the right one is found.
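As a sketch, the classical baseline can be written as a simple linear scan; averaged over random placements of the single marked element, it examines about N/2 elements (the function and variable names here are illustrative, not from the original text):

```python
import random

def classical_search(database, is_target):
    """Scan an unordered database; return the number of checks made."""
    checks = 0
    for item in database:
        checks += 1
        if is_target(item):
            return checks
    return checks

# Average the cost over many random placements of the marked element.
N = 1000
trials = 2000
total = 0
for _ in range(trials):
    target = random.randrange(N)
    db = list(range(N))
    random.shuffle(db)
    total += classical_search(db, lambda x: x == target)
print(total / trials)  # close to N/2 = 500
```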

In implementing this algorithm it is possible to use the same hardware as in the classical case, but by specifying the input and output as superpositions of states, the object can be found in O(√N) quantum mechanical steps instead of O(N) classical steps. Each quantum mechanical step consists of an elementary unitary operation, which we consider below.

To implement this algorithm we need the following three elementary operations: the first is the preparation of a state in which the system is found with equal probability in any of its N basis states; the second is the Hadamard transform; and the third is a selective phase rotation of states.

As is known, a basic operation for quantum computing is the operation M acting on one bit, represented by the following matrix:

M = 1/√2 (  1   1 )
         (  1  -1 )

i.e., a bit in state 0 becomes a superposition of two states: (1/√2, 1/√2). Similarly, a bit in state 1 is transformed into (1/√2, -1/√2), i.e., the magnitude of the amplitude of each state is 1/√2, but the phase of state 1 is inverted. The phase has no analogue in classical probabilistic algorithms. It arises in quantum mechanics, where the probability amplitude is complex. In a system whose state is described by p bits (i.e., there are N = 2^p possible states), we can apply M to each bit independently, sequentially changing the state of the system. If the initial configuration had all p bits in state 0, the resulting superposition will have equal amplitudes for each of the 2^p states. This is the way to create a superposition with the same amplitude in every state.
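A minimal sketch of the action of M on the two basis states, with amplitudes kept as plain Python floats (the helper `apply` is an illustrative name):

```python
import math

s = 1 / math.sqrt(2)
M = [[s, s],
     [s, -s]]  # the one-bit transformation M (Walsh-Hadamard)

def apply(gate, state):
    """Multiply a 2x2 gate matrix by a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

zero, one = [1.0, 0.0], [0.0, 1.0]
print(apply(M, zero))  # (1/sqrt(2), 1/sqrt(2))
print(apply(M, one))   # (1/sqrt(2), -1/sqrt(2)): phase of |1> inverted
# Applying M twice returns the original basis state (up to rounding):
print(apply(M, apply(M, zero)))
```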

The third transformation we need is the selective rotation of the phase of the amplitude in certain states. For a two-state system it has the form:

( e^(iφ1)     0    )
(    0     e^(iφ2) )

where φ1 and φ2 are arbitrary real numbers. Note that, unlike the Hadamard transform and other state transition matrices, this transformation leaves the probability of each state unchanged, since the square of the absolute magnitude of the amplitude in each state remains the same.

Consider the problem in an abstract form.

Let the system have N = 2^p states, denoted S_0, ..., S_{N-1}. These 2^p states are represented as p-bit strings. Let there be a single state, say S_ν, that satisfies the condition C(S_ν) = 1, while for all other states S, C(S) = 0 (it is assumed that for any state S the condition can be evaluated in unit time). The task is to identify the state S_ν.

Let us move on to the algorithm itself.

Steps (1) and (2) are a sequence of elementary unitary operations described earlier. Step (3) is the final measurement carried out by the external system.

(1) Bring the system into the superposition

(1/√N, 1/√N, ..., 1/√N)

with the same amplitude for each of the N states. This superposition can be obtained in O(log N) steps.

(2) Repeat the following unitary operation O(√N) times:

a. Let the system be in some state S:

if C(S) = 1, rotate the phase by π radians;

if C(S) = 0, leave the system unchanged.

b. Apply the diffusion transform D, defined by the matrix D with D_ij = 2/N for i ≠ j and D_ii = -1 + 2/N. D can be implemented as a sequence of unitary transformations, D = W R W, where W is the Walsh-Hadamard transform matrix and R is a phase rotation matrix (it rotates the phase of every state except the first by π).

(3) Measure the resulting state. It will be the desired state S_ν (i.e., the state satisfying C(S_ν) = 1) with probability at least 0.5. Note that step (2a) is a phase rotation. Its implementation must include a procedure that recognizes the state and then decides whether or not to rotate the phase. It must be carried out in such a way as to leave no trace on the state of the system, so that paths leading to the same final state are indistinguishable and can interfere. Note that this procedure does not involve a classical measurement.
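Steps (1)-(3) can be simulated classically for small N. The sketch below (plain Python, illustrative names) flips the sign of the marked amplitude for step (2a) and uses the well-known "inversion about the mean" form of the diffusion transform for step (2b), since (D a)_i = 2·mean(a) - a_i:

```python
import math

def grover_search(p, marked):
    """Classically simulate Grover's algorithm on N = 2^p states."""
    N = 2 ** p
    # Step (1): equal superposition, amplitude 1/sqrt(N) in every state.
    amp = [1 / math.sqrt(N)] * N
    # Step (2): repeat O(sqrt(N)) times; about (pi/4)*sqrt(N) is optimal.
    for _ in range(int(round(math.pi / 4 * math.sqrt(N)))):
        # (2a) selective phase rotation: C(S) = 1 only for the marked
        # state, so only its amplitude gets the pi rotation (sign flip).
        amp[marked] = -amp[marked]
        # (2b) diffusion transform D as inversion about the mean.
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]
    # Step (3): measurement probabilities are the squared amplitudes.
    return [a * a for a in amp]

probs = grover_search(9, marked=300)   # N = 512 states
print(probs[300])                      # close to 1: marked state dominates
```

Running this for N = 512 concentrates almost all of the probability on the marked state after only round(π√512/4) = 18 iterations, versus ~256 classical checks on average.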

This quantum search algorithm is likely to be easier to implement than many other known quantum mechanical algorithms, since the only operations required are the Walsh-Hadamard transform and the conditional phase shift, each of which is relatively simple compared to the operations used in other quantum algorithms.
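One can also check directly, for a small register, that the sequence W R W reproduces the diffusion matrix D with entries D_ii = 2/N - 1 and D_ij = 2/N. This sketch builds W from its explicit entries W_ij = (-1)^(popcount(i AND j))/√N and takes R to rotate the phase of every state except the first by π:

```python
import math

p = 3                      # number of bits
N = 2 ** p                 # N = 2^p states

# Walsh-Hadamard matrix: W[i][j] = (-1)^(popcount(i & j)) / sqrt(N)
W = [[(-1) ** bin(i & j).count("1") / math.sqrt(N) for j in range(N)]
     for i in range(N)]

# R: phase rotation by pi of every state except the first one.
R = [[(1.0 if i == 0 else -1.0) if i == j else 0.0 for j in range(N)]
     for i in range(N)]

def matmul(A, B):
    """Multiply two N x N matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

D = matmul(matmul(W, R), W)

# Compare with D_ij = 2/N (i != j) and D_ii = 2/N - 1.
ok = all(abs(D[i][j] - (2 / N - (1.0 if i == j else 0.0))) < 1e-9
         for i in range(N) for j in range(N))
print(ok)  # True
```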


Conclusion

Today quantum computers and quantum information technologies are still at the stage of pioneering development. Solving the challenges these technologies now face will allow quantum computers to take their rightful place as the fastest computing machines physically possible. By now, error correction has advanced significantly, bringing closer the point at which computers can be built reliable enough to withstand the effects of decoherence. On the other hand, the creation of quantum hardware is still only an emerging industry; but the work done to date suggests that it is only a matter of time before machines are built large enough to run serious algorithms, such as Shor's algorithm. Thus, quantum computers will certainly appear. At the very least they will be the most advanced computing devices of their time, and today's computers will become obsolete. Quantum computing originates in very specialized areas of theoretical physics, but its future will undoubtedly have a huge impact on the life of all mankind.


Bibliography

1. Quantum Computing: Pros and Cons. Ed. V. A. Sadovnichy. Izhevsk: Udmurt University Press, 1999. 212 p.

2. Belonuchkin V. E., Zaikin D. A., Tsipenyuk Yu. M. Fundamentals of Physics. Course of General Physics: Textbook. In 2 vols. Vol. 2. Quantum and Statistical Physics. Moscow: FIZMATLIT, 2001. 504 p.

3. Valiev K. A. "Quantum computers: can they be made 'big'?", Uspekhi Fizicheskikh Nauk, vol. 169, no. 6, 1999.

4. Valiev K. A. "Quantum informatics: computers, communications and cryptography", Bulletin of the Russian Academy of Sciences, vol. 70, no. 8, pp. 688-695, 2000.

5. Maslov D. "Quantum Computing and Communication: Reality and Prospects", Computerra, no. 46, 2004.

6. Khalfin L. A. "The Quantum Zeno Effect", Uspekhi Fizicheskikh Nauk, vol. 160, no. 10, 1990.

7. Holevo A. "Quantum informatics: past, present, future", In the World of Science, no. 7, 2008.

8. Center for Quantum Technologies, National University of Singapore, www.quantumlah.org

In the Schrödinger picture, it is convenient to represent the change of a qubit in time under the action of unitary operators graphically. This approach is widely used in the field of quantum computing. So-called quantum circuits serve as an analogue of the graphical representation of electrical circuits. They too are built from a set of gates, similar to digital AND, OR, NOT gates, flip-flops, registers, adders, and so on.

Suppose we have a qubit in the basis state |0>. As before, we can represent it as the column vector (1, 0). If we feed it to the input of a gate, call it X, the state vector will change. This gate is represented by the Pauli sigma-x matrix. The Pauli matrices, besides being Hermitian, are also unitary. Not all Hermitian matrices are unitary, but the Pauli matrices are.

So, multiplying the Pauli X matrix by the original vector, we get the column vector (0, 1). This is the second basis ket vector |1>. That is, this gate has mapped 0 to 1. It is also called NOT because it performs negation, inversion. Indeed, if we add another such gate after it, we return to the zero state.

Unlike a classical bit, a qubit can be in a superposition of the basis vectors. The next gate is called the Hadamard gate and is represented by the unitary matrix H = 1/√2 [[1, 1], [1, -1]]. It transforms the state zero into the superposition (|0> + |1>)/√2.

Note that when this matrix acts on the ket vector |1>, it maps it to (|0> - |1>)/√2.

With these two gates we can graphically represent the Mach-Zehnder interferometer experiment from the previous video. The matrices presented here are identical to the evolution operators considered there. The passage of a photon through a semitransparent mirror corresponds to a Hadamard gate. A full mirror corresponds to the inversion gate X. The second semitransparent mirror is again represented by a Hadamard gate. The first gate puts the qubit into a superposition, the second does nothing to the resulting state, and the third takes the superposition back to a basis vector.
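The interferometer circuit can be checked numerically: applying H, then X, then H to |0> returns |0>. A sketch with hand-written 2x2 matrices (the helper `apply` is an illustrative name):

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]   # Hadamard gate (semitransparent mirror)
X = [[0.0, 1.0],
     [1.0, 0.0]]        # Pauli X gate (full mirror)

def apply(gate, state):
    """Multiply a 2x2 gate matrix by a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

state = [1.0, 0.0]       # |0>
state = apply(H, state)  # superposition: amplitudes (0.707, 0.707)
state = apply(X, state)  # X swaps the amplitudes -- here nothing changes
state = apply(H, state)  # back to the basis vector |0>
print([round(a, 6) for a in state])  # [1.0, 0.0]
```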

Two-qubit states are represented graphically by adding another horizontal line. Now the original vector is in the |00> state, which is equal to the tensor product of the corresponding one-qubit vectors. It is represented as a column vector with four components.

You can, for example, put a Hadamard gate on each qubit. In fact, this means that the original vector must be acted upon by the tensor product of two Hadamard matrices. We have a 4x4 matrix multiplied by a four-component column vector. The result will also be a four-component column vector.

However, not every unitary 4x4 matrix can be decomposed into a tensor product of 2x2 matrices. An example is the common gate CNOT, controlled negation. It must be applied to the entire two-qubit state vector at once. In circuit diagrams it is drawn as a dot on the control line joined to a circle on the target line.
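CNOT flips the second (target) qubit only when the first (control) qubit is 1, which is easy to verify on the four basis vectors (a sketch; `apply` is an illustrative helper name):

```python
# CNOT as a 4x4 matrix in the basis |00>, |01>, |10>, |11>
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def apply(gate, state):
    """Multiply an n x n gate matrix by an n-component state vector."""
    n = len(state)
    return [sum(gate[i][j] * state[j] for j in range(n)) for i in range(n)]

basis = {"00": [1, 0, 0, 0], "01": [0, 1, 0, 0],
         "10": [0, 0, 1, 0], "11": [0, 0, 0, 1]}
for name, vec in basis.items():
    out = apply(CNOT, vec)
    # The index of the single 1 in the output names the resulting state.
    print(name, "->", format(out.index(1), "02b"))
# 00 -> 00, 01 -> 01, 10 -> 11, 11 -> 10
```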

The most general two-qubit state vector is described by a superposition of four basis vectors. Therefore, to describe it, 4 complex numbers are needed - the probability amplitudes.

For a three-qubit vector, the superposition consists of 2^3 = 8 terms. Unitary operators acting on such an eight-component column vector are represented by 8x8 matrices. That is why modeling quantum computations on a classical computer becomes impossible already at a modest number of qubits.

So, to operate with a hundred-qubit state, it is necessary to store 2^100 complex numbers just to describe the vector itself. 2^100 is already more than the number of elementary particles in the observable part of the Universe. That is why humanity needs a hardware quantum computer rather than its classical imitator.
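The growth of the required memory can be computed directly: an n-qubit state vector has 2^n complex amplitudes (counted here at 16 bytes each, a common double-precision complex size; the function name is illustrative):

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store a full n-qubit state vector."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 20, 30, 50):
    print(n, "qubits:", state_vector_bytes(n), "bytes")

# 100 qubits means 2**100 amplitudes -- more values than there are
# elementary particles in the observable Universe.
print(2 ** 100)
```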

On the Internet you can find simulators of quantum circuits and experiment with them. Here is one of them, called Quirk. At the output it shows the probability of measuring a one for each qubit; also the Bloch sphere, which displays a qubit graphically as a point on a sphere; and a graphical representation of the probability amplitudes: two complex numbers for one qubit, four for a two-qubit state.

Initially we have a two-qubit vector in the basis state |00>. That is, the corresponding probability amplitude equals one, and the other three are zero. In the general case, however, all four amplitudes are nonzero. For clarity, let us place some gates whose matrices themselves change over time, for example a CNOT gate. We see that all four probability amplitudes change their values.

Let us assemble a circuit corresponding to our Mach-Zehnder interferometer experiment. Place a Hadamard gate: the probability of measuring a one becomes 50%, and the probability amplitudes become 0.707 each, for zero and for one.

Now place a NOT gate, that is, the Pauli X matrix. Nothing changes. A second Hadamard gate returns the state vector to the original basis vector. Note that on moving to a three-qubit vector there are already eight amplitudes, for a four-qubit one 16, and so on. This simulator can run with at most a 16-qubit state; to do so it must store at least 2^16 values, that is, on the order of 64K of memory. For 32 qubits, at least 4 GB of memory are needed. The required resources grow very quickly. The simulator also contains ready-made circuits of popular algorithms. Here, for example, is the circuit for testing Bell's inequalities, which we considered in parts 26 and 27.

However, one should not imagine a quantum computer simply as an analogue of a classical one with exponentially greater computing power, with the "built-in quantum parallelism" often invoked in popular science. There are indeed algorithms that allow some problems to be solved on a quantum computer in acceptable time where a classical one would need billions of years. But these problems are very specific, such as taking discrete logarithms or factoring large numbers.

That is, a quantum computer is not always much faster than a classical one. Rather, it can be viewed as a specialized processor for a narrow range of tasks, such as operations with matrices or the modeling of quantum phenomena, for example in chemistry problems.

But who knows how this area will develop when the technology reaches the mass production of cheap multi-qubit quantum processors.

L. FEDICHKIN, Candidate of Physical and Mathematical Sciences (Institute of Physics and Technology, Russian Academy of Sciences).

Using the laws of quantum mechanics, it is possible to create a fundamentally new type of computers that will allow solving some problems that are inaccessible even to the most powerful modern supercomputers. The speed of many complex calculations will increase dramatically; messages sent over quantum communication lines can neither be intercepted nor copied. Today, prototypes of these quantum computers of the future have already been created.

American mathematician and physicist of Hungarian origin Johann von Neumann (1903-1957).

American theoretical physicist Richard Phillips Feynman (1918-1988).

American mathematician Peter Shor, a specialist in the field of quantum computing. He proposed a quantum algorithm for fast factorization of large numbers.

Quantum bit or qubit. The states and correspond, for example, to the direction of the spin of the atomic nucleus up or down.

A quantum register is a chain of quantum bits. One- or two-qubit quantum gates perform logical operations on qubits.

INTRODUCTION, OR A LITTLE ABOUT INFORMATION PROTECTION

What do you think is the most licensed program in the world? I will not venture to insist that I know the right answer, but I do know one wrong one: it is not any version of Microsoft Windows. The most widespread operating system is outstripped by a modest product from RSA Data Security, Inc.: a program implementing the RSA public key encryption algorithm, named after its authors, the American mathematicians Rivest, Shamir and Adleman.

The fact is that the RSA algorithm is built into most of the operating systems sold, as well as many other applications used in various devices - from smart cards to cell phones. In particular, it is also available in Microsoft Windows, which means that it is obviously more widespread than this popular operating system. To detect traces of RSA, for example, in the Internet Explorer browser (a program for viewing www-pages on the Internet), just open the "Help" menu, enter the "About Internet Explorer" submenu and view the list of third-party products used. Another common browser, Netscape Navigator, also uses the RSA algorithm. In general, it is difficult to find a well-known high-tech company that would not buy a license for this program. To date, RSA Data Security, Inc. has already sold over 450 million(!) licenses.

Why is the RSA algorithm so important?

Imagine that you need to quickly exchange a message with a person who is far away. Thanks to the development of the Internet, such an exchange has become available today to most people - you just need to have a computer with a modem or network card. Naturally, when exchanging information over the network, you would like to keep your messages secret from outsiders. However, it is impossible to completely protect an extended communication line from eavesdropping. This means that when sending messages, they must be encrypted, and when received, they must be decrypted. But how do you and your interlocutor agree on which key you will use? If you send the key to the cipher along the same line, then an eavesdropping attacker can easily intercept it. You can, of course, send the key over some other communication line, for example, send it by telegram. But such a method is usually inconvenient and, moreover, not always reliable: another line can also be tapped. It’s good if you and your addressee knew in advance that you would exchange encryptions, and therefore handed over the keys to each other in advance. But what if, for example, you want to send a confidential commercial offer to a potential business partner or buy a product you like in a new online store using a credit card?

In the 1970s, encryption systems were proposed to solve this problem, using two kinds of keys for the same message: open (not requiring secret storage) and closed (strictly secret). The public key is used to encrypt the message, and the private key is used to decrypt it. You send your correspondent the public key, and he encrypts his message with it. All that an attacker who has intercepted the public key can do is to encrypt his letter with it and send it to someone. But he will not be able to decipher the correspondence. You, knowing the private key (it is initially stored with you), will easily read the message addressed to you. To encrypt the response messages, you will use the public key sent by your correspondent (and he keeps the corresponding private key for himself).

Just such a cryptographic scheme is used in the RSA algorithm, the most widespread public key encryption method. Moreover, the following important hypothesis is used to create the pair of public and private keys. If there are two large prime numbers M and K (each requiring more than a hundred decimal digits to write down), then it is easy to find their product N = MK (a computer is not even necessary: a sufficiently accurate and patient person can multiply such numbers with pen and paper). But solving the inverse problem, that is, knowing the large number N, decomposing it into the prime factors M and K (the so-called factorization problem), is practically impossible! It is this problem that faces an attacker who decides to "crack" the RSA algorithm and read the information encrypted with it: to find the private key from the public key, one would have to compute M or K.
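The asymmetry described here can be felt even on small numbers: multiplying M and K is a single operation, while recovering them from N by brute force takes on the order of √N steps (a toy sketch with illustrative primes, nowhere near cryptographic sizes):

```python
def factor_by_trial_division(n):
    """Return (m, k, steps): the smallest factor of n found by brute force."""
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return d, n // d, steps
        d += 1
    return n, 1, steps  # n itself is prime

M, K = 104729, 104723            # two primes (illustrative sizes)
N = M * K                        # multiplication: effectively instant
m, k, steps = factor_by_trial_division(N)
print(m, k, steps)               # recovers the factors, but in ~10^5 steps
```

For hundred-digit primes the same brute-force scan would need ~10^100 steps, which is exactly why the scheme was considered safe.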

To test the validity of the hypothesis about the practical difficulty of factoring large numbers, special competitions have been and are still being held. The record is the factorization of a mere 155-digit (512-bit) number. The calculations were carried out in parallel on many computers over seven months in 1999. If this task were performed on a single modern personal computer, it would take about 35 years of machine time! Calculations show that, using even a thousand modern workstations and the best computational algorithm known today, one 250-digit number could be factored in about 800 thousand years, and a 1000-digit number in 10^25 (!) years. (For comparison, the age of the Universe is ~10^10 years.)

Therefore, cryptographic algorithms like RSA, operating with sufficiently long keys, were considered absolutely reliable and were used in many applications. And everything was fine... until quantum computers appeared.

It turns out that, using the laws of quantum mechanics, it is possible to build computers for which the factorization problem (and many others!) presents no difficulty at all. It is estimated that a quantum computer with a memory of only about 10,000 quantum bits could factor a 1000-digit number into prime factors in just a few hours!

HOW IT ALL BEGAN?

It was not until the mid-1990s that the theory of quantum computers and quantum computing established itself as a new field of science. As is often the case with great ideas, it is hard to single out a pioneer. Apparently, the Hungarian-born mathematician J. von Neumann was the first to draw attention to the possibility of developing quantum logic. However, at that time not only quantum but even ordinary, classical computers had yet to be created. And with the advent of the latter, the main efforts of scientists were directed primarily at finding and developing new elements for them (transistors, and then integrated circuits), and not at creating fundamentally different computing devices.

In the 1960s, the American physicist R. Landauer, who worked at IBM Corporation, tried to draw the attention of the scientific world to the fact that calculations are always some kind of physical process, which means that it is impossible to understand the limits of our computing capabilities without specifying which physical implementation they correspond to. Unfortunately, at that time, the prevailing view among scientists was that computation was an abstract logical procedure that should be studied by mathematicians, not physicists.

As computers proliferated, scientists working with quantum objects came to the conclusion that it is practically impossible to directly calculate the state of an evolving system consisting of as few as a few dozen interacting particles, such as a methane molecule (CH4). The reason is that a complete description of a complex system requires keeping in computer memory an exponentially large (in the number of particles) number of variables, the so-called quantum amplitudes. A paradoxical situation arose: knowing the equation of evolution, knowing with sufficient accuracy all the interaction potentials of the particles and the initial state of the system, it is practically impossible to calculate its future, even if the system consists of only 30 electrons in a potential well and one has a supercomputer whose RAM contains a number of bits equal to the number of atoms in the visible region of the Universe(!). And at the same time, to study the dynamics of such a system, one can simply set up an experiment with 30 electrons, placing them in the given potential and initial state. The Russian mathematician Yu. I. Manin, in particular, drew attention to this, pointing out in 1980 the need to develop a theory of quantum computing devices. In the 1980s the same problem was studied by the American physicist P. Benioff, who showed clearly that a quantum system can perform calculations, and by the English scientist D. Deutsch, who theoretically developed a universal quantum computer superior to its classical counterpart.

The Nobel Prize winner in physics R. Feynman, who is well known to regular readers of Science and Life, attracted much attention to the problem of developing quantum computers. Thanks to his authoritative appeal, the number of specialists who paid attention to quantum computing has increased many times over.

And yet, for a long time it remained unclear whether the hypothetical computing power of a quantum computer could be used to speed up the solution of practical problems. But in 1994 the American mathematician P. Shor, an employee of Lucent Technologies (USA), stunned the scientific world by proposing a quantum algorithm allowing fast factorization of large numbers (the importance of this problem was discussed in the introduction). Compared to the best classical method known today, Shor's quantum algorithm gives a manyfold speedup, and the longer the number being factored, the greater the gain in speed. The fast factorization algorithm is of great practical interest to the various intelligence services that have accumulated banks of undecrypted messages.

In 1996, Shor's colleague at Lucent Technologies, L. Grover, proposed a fast quantum search algorithm for an unordered database. (An example of such a database is a telephone book in which the subscribers' names are arranged not alphabetically but arbitrarily.) The task of searching for and choosing the optimal element among numerous options arises very often in economic, military and engineering problems, and in computer games. Grover's algorithm allows not only a speedup of the search process but also approximately a doubling of the number of parameters taken into account when choosing the optimum.

The real creation of quantum computers was hindered, in essence, by a single serious problem: errors, or noise. The point is that the same level of noise corrupts the process of quantum computation much more severely than classical computation. Ways to solve this problem were outlined in 1995 by P. Shor, who developed a scheme for encoding quantum states and correcting errors in them. Unfortunately, the topic of error correction in quantum computers, important as it is, is too difficult to cover in this article.

DEVICE OF A QUANTUM COMPUTER

Before describing how a quantum computer works, let us recall the main features of quantum systems (see also "Science and Life" No. 8, 1998; No. 12, 2000).

To understand the laws of the quantum world, one should not rely directly on everyday experience. Quantum particles behave in the usual (everyday) way only if we constantly "peek" at them or, more strictly speaking, constantly measure the state they are in. But as soon as we "turn away" (stop observing), quantum particles immediately pass from a completely definite state into several different guises at once. That is, an electron (or any other quantum object) will be partly at one point, partly at another, partly at a third, and so on. This does not mean that it is divided into segments like an orange; then it would be possible to reliably isolate some part of the electron and measure its charge or mass. But experience shows that after a measurement the electron always turns out to be "safe and sound" at one single point, despite having managed before then to visit almost everywhere at once. This state of an electron, in which it is located at several points in space simultaneously, is called a superposition of quantum states and is usually described by the wave function, introduced in 1926 by the Austrian physicist E. Schrödinger. The absolute value of the wave function at any point, squared, determines the probability of finding the particle at that point at the given moment. After the position of a particle is measured, its wave function contracts (collapses), as it were, to the point where the particle was detected, and then begins to spread out again. The property of quantum particles of being in many states at once, called quantum parallelism, has been successfully exploited in quantum computing.

Quantum bit

The basic unit of a quantum computer is the quantum bit, or qubit for short (q-bit). This is a quantum particle with two basis states, which are denoted 0 and 1 or, as is customary in quantum mechanics, |0> and |1>. The two values of a qubit can correspond, for example, to the ground and excited states of an atom, the up and down directions of the spin of an atomic nucleus, the direction of the current in a superconducting ring, two possible positions of an electron in a semiconductor, and so on.

Quantum register

The quantum register is arranged almost like the classical one: it is a chain of quantum bits over which one- and two-bit logical operations can be performed (similar to the NOT, NAND, etc. operations in a classical register).

The basis states of a quantum register formed by L qubits are, just as in the classical case, all possible sequences of zeros and ones of length L. In total there can be 2^L different combinations; they can be regarded as the binary representations of the numbers from 0 to 2^L - 1 and denoted |0>, |1>, ..., |2^L - 1>. However, these basis states do not exhaust all possible values of the quantum register (in contrast to the classical one), since there also exist superposition states defined by complex amplitudes tied together by the normalization condition. Most of the possible values of a quantum register (with the exception of the basis ones) simply have no classical analogue. The states of the classical register are only a pale shadow of the full wealth of states of a quantum computer.

Imagine that the register is subjected to an external influence: for example, electrical pulses are applied to part of the space, or laser beams are directed at it. If this is a classical register, a pulse, which can be regarded as a computational operation, will change L variables. If it is a quantum register, the same pulse can simultaneously transform up to 2^L variables. Thus a quantum register is, in principle, capable of processing information many times faster than its classical counterpart. It is immediately clear from this that small quantum registers (L < 20) can serve only to demonstrate individual components and operating principles of a quantum computer, and will bring no great practical benefit, since they will not outpace modern computers while certainly costing more. In reality the quantum speedup is usually considerably smaller than the rough upper estimate just given (this is connected with the difficulty of preparing a large number of amplitudes and reading out the result), so a practically useful quantum computer must contain thousands of qubits. On the other hand, it is clear that to achieve a real speedup of computations there is no need to assemble millions of quantum bits: a computer with memory measured in mere kiloqubits will, on some problems, be incomparably faster than a classical supercomputer with terabytes of memory.

However, it should be noted that there is a class of problems for which quantum algorithms give no significant speedup over classical ones. One of the first to show this was the Russian mathematician Yu. Ozhigov, who constructed a number of examples of algorithms that fundamentally cannot be accelerated on a quantum computer by even a single clock cycle.

Nevertheless, there is no doubt that computers operating according to the laws of quantum mechanics are a new and decisive stage in the evolution of computing systems. It remains only to build them.

QUANTUM COMPUTERS TODAY

Prototypes of quantum computers exist already today, though so far only small registers consisting of just a few quantum bits have been assembled experimentally. For example, a group led by the American physicist I. Chuang (IBM) recently announced the assembly of a 5-bit quantum computer. This is undoubtedly a great success. Unfortunately, existing quantum systems are not yet capable of reliable computation, being either insufficiently controllable or too susceptible to noise. However, there are no physical prohibitions on building an efficient quantum computer; it is only necessary to overcome the technological difficulties.

There are several ideas and proposals on how to make reliable and easily manageable quantum bits.

I. Chuang is developing the idea of using the nuclear spins of certain organic molecules as qubits.

The Russian researcher M. V. Feigelman, who works at the L. D. Landau Institute of Theoretical Physics of the Russian Academy of Sciences, proposes assembling quantum registers from miniature superconducting rings. Each ring plays the role of a qubit, and the states 0 and 1 correspond to the direction of the electric current in the ring, clockwise or counterclockwise. Such qubits can be switched by a magnetic field.

At the Institute of Physics and Technology of the Russian Academy of Sciences, a group led by Academician K. A. Valiev has proposed two options for placing qubits in semiconductor structures. In the first, the role of a qubit is played by an electron in a system of two potential wells created by a voltage applied to mini-electrodes on the semiconductor surface; the states 0 and 1 are the positions of the electron in one or the other well, and the qubit is switched by changing the voltage on one of the electrodes. In the second, the qubit is the nucleus of a phosphorus atom embedded at a certain point in the semiconductor, with the states 0 and 1 being the directions of the nuclear spin along or against an external magnetic field. Control is carried out by the joint action of resonant-frequency magnetic pulses and voltage pulses.

Thus, research is proceeding actively, and it can be assumed that an effective quantum computer will be created in the near future, perhaps within ten years.

A LOOK INTO THE FUTURE

It is thus quite possible that future quantum computers will be manufactured using traditional microelectronic technology and will contain many control electrodes, resembling a modern microprocessor. To reduce noise, which is critical for the normal operation of a quantum computer, the first models will most likely have to be cooled with liquid helium. The first quantum computers are likely to be bulky, expensive devices that do not fit on a desk and are attended by a large staff of white-coated system programmers and hardware technicians. At first only state structures will have access to them, then wealthy commercial organizations. But the era of conventional computers began in much the same way.

And what will happen to classical computers? Will they die out? Hardly. Both classical and quantum computers have their own application areas, although the balance in the market will likely shift gradually toward the latter.

The introduction of quantum computers will not make fundamentally unsolvable classical problems solvable; it will only speed up some computations. In addition, quantum communication (the transfer of qubits over a distance) will become possible, leading to the emergence of a kind of quantum Internet. Quantum communication will provide connections protected from eavesdropping by the laws of quantum mechanics themselves. Information stored in quantum databases will be better protected from copying than it is now, and companies producing programs for quantum computers will be able to protect them from any copying, including illegal copying.

For a deeper understanding of this topic, you can read the review article by E. Rieffel and W. Polak, "Fundamentals of Quantum Computing," published in the Russian journal Quantum Computers and Quantum Computing (No. 1, 2000). (Incidentally, this is the first and so far the only journal in the world devoted to quantum computing; additional information about it can be found on the Internet at http://rcd.ru/qc .) Having mastered this work, you will be able to read scientific articles on quantum computing.

Somewhat more mathematical preparation will be required to read the book by A. Kitaev, A. Shen, and M. Vyalyi, Classical and Quantum Computation (Moscow: MTsNMO-CheRo, 1999).

A number of fundamental aspects of quantum mechanics essential for quantum computing are analyzed in the book Quantum Teleportation: An Ordinary Miracle by V. V. Belokurov, O. D. Timofeevskaya, and O. A. Khrustalev (Izhevsk: RHD, 2000).

The RCD publishing house is preparing to publish A. Steane's review of quantum computers as a separate book.

The following literature will be useful not only in cognitive, but also in historical terms:

1) Yu. I. Manin. Computable and Non-Computable. Moscow: Sov. Radio, 1980.

2) J. von Neumann. Mathematical Foundations of Quantum Mechanics. Moscow: Nauka, 1964.

3) R. Feynman. Simulating Physics with Computers // Quantum Computer and Quantum Computing: collection in 2 vols. Izhevsk: RHD, 1999. Vol. 2, pp. 96-123.

4) R. Feynman. Quantum Mechanical Computers // Ibid., pp. 123-156.

The reason such modeling is important is that classical digital computers can do little with multi-reference states; in many cases, classical computational methods are unable to describe the electronic structure of molecules not only quantitatively but even qualitatively.

An important problem that was recently solved was finding ways for a quantum computer to perform such calculations efficiently and with the chemical accuracy required for real-world applications. The program was run on a 20-qubit IBM processor.

Why has chemistry become such an area of interest? Chemistry is one of the most promising commercial applications for several reasons. Scientists hope to find more energy-efficient materials for use in batteries or solar panels. There are also environmental benefits: about two percent of the world's energy goes into producing fertilizers by processes that are highly inefficient and could be improved through sophisticated chemical analysis.

Finally, there are applications in personalized medicine: predicting how pharmaceuticals will affect people based on their genetics and, in the long term, tailoring a drug to a specific person for the most effective treatment with minimal side effects.

CQC and JSR Corp pursued two strategies that allowed the scientists to make this breakthrough. First, they used CQC's own compiler to convert the program into instructions that manipulate the qubits in the most efficient way. Such efficiency is especially important on today's low-qubit machines, where every qubit counts and speed of execution is critical.

Second, they used quantum machine learning, a subfield of machine learning that works with quantum amplitudes rather than just probabilities. The method they used was specially designed for low-qubit quantum computers, with part of the workload offloaded to traditional processors.

In the next few years, significant improvement is expected in both quantum hardware and software. As calculations become more precise, more industries will be able to take advantage of quantum computing applications, including quantum chemistry. Gartner predicts that within four years, 20% of corporations will have a budget for quantum computing, and within ten years it will become an integral component of technology.

The creation of a universal quantum computer is one of the most difficult tasks of modern physics. Its solution would radically change humanity's ideas about the Internet and methods of information transfer, cybersecurity and cryptography, electronic currencies, artificial intelligence and machine learning systems, methods for synthesizing new materials and drugs, and approaches to modeling complex physical, quantum, and ultra-large (Big Data) systems.

The exponential growth of dimension encountered when trying to compute real systems, or even the simplest quantum systems, is an insurmountable obstacle for classical computers. However, in 1980 Yuri Manin, and independently in 1982 Richard Feynman (at greater length), put forward the idea of using quantum systems themselves for computation. Unlike classical modern computers, quantum circuits compute with qubits (quantum bits), which are by nature quantum two-level systems and allow direct use of the phenomenon of quantum superposition. In other words, a qubit can be in states |0> and |1> simultaneously, and two coupled qubits can simultaneously be in states |00>, |01>, |10>, and |11>. It is this property of quantum systems that is expected to provide an exponential increase in parallel computing performance, making quantum computers, for such problems, millions of times faster than the most powerful modern supercomputers.
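The growth of the state space described above can be made concrete with a few lines of NumPy. This is an illustrative sketch (the variable names and the 30-qubit memory example are our own choices, not figures from the text):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# One qubit in an equal superposition of |0> and |1>:
plus = (ket0 + ket1) / np.sqrt(2)

# Two coupled qubits: the joint state is the tensor product, holding
# 4 amplitudes at once, one for each of |00>, |01>, |10>, |11>.
two_qubits = np.kron(plus, plus)
print(len(two_qubits))        # 4 amplitudes

# An n-qubit register needs 2**n complex amplitudes; storing the state
# of just 30 qubits as complex128 already takes roughly 17 GB.
n = 30
print(2 ** n * 16 / 1e9)      # gigabytes needed for the state vector
```

Doubling the register from 30 to 31 qubits doubles the memory again, which is why classical simulation of even modest quantum systems becomes intractable.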

In 1994, Peter Shor proposed a quantum algorithm for factoring numbers into prime factors. Whether an efficient classical solution to this problem exists is an extremely important question that remains open, while Shor's quantum algorithm provides exponential acceleration relative to the best known classical analogue. For example, a modern petaflop-range supercomputer (10^15 operations/sec) would take about 5 billion years to factor a number with 500 decimal digits, whereas a quantum computer operating in the megahertz range (10^6 operations/sec) would solve the same problem in about 18 seconds. Importantly, the difficulty of this problem underlies the popular RSA cryptographic protection algorithm, which will simply lose its relevance once a quantum computer is created.
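To see why factoring is believed to be hard classically, consider the simplest classical baseline, trial division, whose running time grows roughly as the square root of the number, i.e., exponentially in the number of digits. A minimal sketch (this illustrates the classical baseline only, not Shor's algorithm; the function name and the toy modulus are ours):

```python
def trial_division(n: int) -> tuple[int, int]:
    """Return a factorization (p, q) of n, or (n, 1) if n is prime."""
    d = 2
    while d * d <= n:          # tries up to ~sqrt(n) candidate divisors
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

# A toy RSA-style modulus: trivial at this size, but utterly infeasible
# by this method at the 500-digit sizes discussed above.
print(trial_division(3233))    # (53, 61)
```

Each extra decimal digit multiplies the work by about sqrt(10), which is the exponential wall that Shor's algorithm circumvents.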

In 1996, Lov Grover proposed a quantum algorithm that solves the exhaustive-search problem with quadratic acceleration. Although Grover's speedup is noticeably smaller than Shor's, what matters is its wide range of applications, together with the apparent impossibility of speeding up the classical variant of exhaustive search. Today, more than 40 efficient quantum algorithms are known, most of them based on the ideas of the Shor and Grover algorithms, and their implementation is an important step toward the creation of a universal quantum computer.
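Grover's quadratic speedup can be checked numerically on a small state vector. A minimal sketch, assuming the standard textbook formulation (a phase-flip oracle followed by inversion about the mean); the search-space size N = 8 and the marked index are arbitrary choices of ours:

```python
import numpy as np

N = 8            # search space of 8 items (3 qubits)
marked = 5       # the index the oracle recognizes

state = np.full(N, 1 / np.sqrt(N))      # uniform superposition

# About (pi/4) * sqrt(N) iterations are optimal -- quadratically fewer
# than the ~N/2 checks classical exhaustive search needs on average.
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    state[marked] *= -1                 # oracle: phase flip on the target
    state = 2 * state.mean() - state    # diffusion: inversion about the mean

print(round(abs(state[marked]) ** 2, 3))   # -> 0.945
```

After just two iterations the marked item is measured with probability about 0.95, versus 1/8 for a random guess; for N items the iteration count scales as sqrt(N).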

The implementation of quantum algorithms is one of the priorities of the REC FMN. Our research in this area is aimed at developing multi-qubit superconducting quantum integrated circuits for universal quantum information processing systems and quantum simulators. The basic element of such circuits is the Josephson tunnel junction, which consists of two superconductors separated by a thin barrier, a dielectric about 1 nm thick. Superconducting qubits based on Josephson junctions, when cooled in dilution cryostats to nearly absolute zero (~20 mK), exhibit quantum mechanical properties, demonstrating quantization of electric charge (charge qubits) or of phase and magnetic flux (flux qubits), depending on their design. Qubits are combined into circuits with capacitive or inductive coupling elements, as well as superconducting coplanar resonators, and are controlled by microwave pulses of controlled amplitude and phase. Superconducting circuits are particularly attractive because they can be fabricated with the planar mass-production technologies used in the semiconductor industry. At the REC FMN, we use R&D-class equipment from the world's leading manufacturers, specially designed and built for us with the particulars of fabricating superconducting quantum integrated circuits in mind.

Although the quality metrics of superconducting qubits have improved by several orders of magnitude over the past 15 years, superconducting quantum integrated circuits are still very unstable compared to classical processors. Building a reliable universal multi-qubit quantum computer requires solving a large number of physical, technological, architectural, and algorithmic problems. The REC FMN has formed a comprehensive research and development program aimed at creating multi-qubit superconducting quantum circuits, including:

  • methods of formation and research of new materials and interfaces;
  • design and manufacturing technology of elements of quantum circuits;
  • scalable fabrication of high-coherence qubits and high-Q resonators;
  • tomography (characterization) of superconducting qubits;
  • control of superconducting qubits, quantum switching (entanglement);
  • detection methods and error correction algorithms;
  • development of the architecture of multi-qubit quantum circuits;
  • superconducting parametric amplifiers with quantum noise level.

Due to their intrinsic nonlinearity, ultra-low losses, and scalability (lithographic fabrication), Josephson junctions are extremely attractive for creating quantum superconducting circuits. A quantum circuit often requires hundreds or thousands of Josephson junctions with characteristic dimensions on the order of 100 nm per chip, and reliable operation is achieved only if the junction parameters are accurately reproduced. In other words, all junctions of a quantum circuit must be practically identical. To this end, the most modern electron beam lithography methods are used, followed by high-precision shadow deposition through resist or hard masks.

Josephson junctions are formed by standard ultra-high-resolution lithography using two-layer resist or hard masks. When such a two-layer mask is developed, windows are opened for depositing the superconductor layers at angles chosen so that the deposited layers overlap. Before the second superconductor layer is deposited, a very high-quality tunnel dielectric layer of the Josephson junction is formed; after the junctions are complete, the two-layer mask is removed. At every stage, creating "ideal" interfaces is critically important: even atomic-scale contamination radically degrades the parameters of the finished circuits.

The FMN has developed an aluminum technology for forming Al-AlOx-Al Josephson junctions with minimum dimensions in the range of 100-500 nm and critical-current reproducibility no worse than 5%. Ongoing technological research is aimed at finding new materials, improving the operations for forming junctions, integrating them into new process routes, and increasing fabrication reproducibility as the junction count grows to tens of thousands per chip.

Josephson qubits (quantum two-level systems, or "artificial atoms") are characterized by a splitting between the ground-state and excited-state energies and are controlled by standard microwave pulses (with external tuning of the level spacing and eigenstates) at a splitting frequency in the gigahertz range. All superconducting qubits can be divided into charge qubits (electric charge quantization) and flux qubits (magnetic flux or phase quantization), and the main quality criteria from the standpoint of quantum computing are the relaxation time (T1), the coherence time (T2, dephasing), and the time needed to perform one operation. The first charge qubit was realized at the NEC laboratory (Japan) by a research team led by Y. Nakamura and Y. Pashkin (Nature 398, 786-788, 1999). Over the past 15 years, leading scientific groups have improved the coherence times of superconducting qubits by nearly six orders of magnitude, from nanoseconds to hundreds of microseconds, enabling hundreds of two-qubit operations and error-correction algorithms to be implemented.
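The practical meaning of these coherence times is the number of gates that fit before decoherence dominates. A back-of-the-envelope sketch with illustrative numbers (the 100 µs coherence time matches the "hundreds of microseconds" above; the 200 ns two-qubit gate duration is our assumption, not a figure from the text):

```python
# Rough gate budget: operations that fit within the coherence time.
t2 = 100e-6          # coherence time T2, ~hundreds of microseconds
gate_time = 200e-9   # assumed two-qubit gate duration (illustrative)
budget = int(t2 / gate_time)
print(budget)        # -> 500 operations, consistent with "hundreds"
```

The same arithmetic explains the six-orders-of-magnitude point: with nanosecond coherence times, the budget was less than a single gate.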


At the REC FMN, we develop, manufacture, and test charge and flux qubits of various designs (flux qubits, fluxoniums, 2D/3D transmons, X-mons, etc.) with aluminum Josephson junctions, and we investigate new materials and methods for creating highly coherent qubits, aiming to improve their main parameters.

The center's specialists are developing thin-film transmission lines and high-quality superconducting resonators with resonant frequencies in the 3-10 GHz range. These are used in quantum circuit elements and quantum memory, providing control of individual qubits, coupling between them, and real-time readout of their states. The main task here is to increase the quality factor of the fabricated structures in the single-photon regime at low temperatures.

To improve the parameters of superconducting resonators, we study various structure types, thin-film materials (aluminum, niobium, niobium nitride), film deposition methods (electron-beam, magnetron, atomic-layer), and topology formation methods (lift-off lithography, various etching processes) on various substrates (silicon, sapphire), as well as the integration of different materials in one circuit.

Scientific groups from various fields of physics have long studied the coherent coupling of quantum two-level systems to quantum harmonic oscillators. Until 2004, such coupling could be achieved only in atomic physics and quantum optics experiments, where a single atom coherently exchanges a single photon with single-mode radiation. These experiments made a huge contribution to understanding the mechanisms of light-matter interaction, quantum physics, and the physics of coherence and decoherence, and also confirmed the theoretical foundations of quantum computing. In 2004, however, a scientific group led by A. Wallraff (Nature 431, 162-167 (2004)) demonstrated for the first time the coherent coupling of a solid-state quantum circuit to a single microwave photon. Thanks to these experiments, and after a number of technological problems were solved, principles were developed for creating controllable solid-state two-level quantum systems, which formed the basis of the new paradigm of circuit quantum electrodynamics (circuit QED), actively studied in recent years.


Circuit QED schemes are extremely attractive both for studying the interactions of the various elements of quantum systems and for creating quantum devices for practical use. We explore various types of interaction schemes among circuit QED elements: effective coupling of qubits to control elements, circuit solutions for qubit entanglement, quantum nonlinearity of element interactions at small photon numbers, and so on. These studies aim to build a base of practical experimental methods for creating multi-qubit quantum integrated circuits.

The main goal of research in this direction at the FMN is to develop the technological, metrological, methodological, and algorithmic basis for implementing the Shor and Grover algorithms on multi-qubit quantum circuits and to demonstrate quantum acceleration over classical supercomputers. This extremely ambitious scientific and technical task requires solving a huge number of theoretical, physical, technological, circuit-design, metrological, and algorithmic problems, on which leading scientific groups and IT companies are actively working.


Research and development in the field of quantum computing is carried out in close cooperation with leading Russian research teams from the Institute of Solid State Physics of the Russian Academy of Sciences, MISIS, the Moscow Institute of Physics and Technology, NSTU, and the RCC, under the guidance of world-famous Russian scientists.
