**Author:** K. Grace Johnson

**Introduction**

Quantum computing, a field that has been around since the days of Feynman[1], has recently gained broad interest, especially with Google's quantum-supremacy publication in late 2019[2]. Its promise lies in the potential to perform computational tasks that cannot feasibly be achieved on even the largest classical supercomputers—the so-called *quantum supremacy* that Google claimed. Unlike classical bits, qubits can be entangled with one another. A quantum computer consisting of *n* qubits could, in theory, simulate a system with 2^{n} states, so problems that scale exponentially on classical machines could be solved in reasonable (polynomial) time on a quantum machine. Current quantum devices, however, are noisy and therefore cannot maintain perfect entanglement. Given this, it is important to understand the limitations of current hardware by classically simulating quantum devices and algorithms. What makes quantum algorithms difficult to simulate, and what could that tell us about the algorithms themselves? Or, from another angle, how does the entropy of a given problem (i.e. the entanglement of the quantum system) affect our ability to solve it?
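To make the 2^{n} scaling concrete, here is a minimal sketch (in Python with NumPy; the function name is mine, for illustration only) of why storing a full *n*-qubit state is costly on classical hardware:

```python
import numpy as np

def n_qubit_zero_state(n):
    """Statevector of n qubits in |00...0>: requires 2**n complex amplitudes."""
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0  # all amplitude on the |00...0> basis state
    return psi

# A 20-qubit state already holds 2**20 (~1 million) amplitudes.
psi = n_qubit_zero_state(20)
```

Each additional qubit doubles the memory, which is what makes brute-force classical simulation infeasible beyond a few dozen qubits.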

**Theory and Methods**

In my research group, we have been developing a classical simulator for a quantum computer that employs a tree representation of the wavefunction to cut down on computational cost[3], [4]. The method is based on multi-layer multi-configurational time-dependent Hartree (ML-MCTDH)[5], an approach used in the theoretical chemistry community to study quantum dynamical systems. A wavefunction **Ψ** approximating a quantum system is represented in a basis recursively defined by a given tree graph, so that **Ψ** is the root node of the tree (see Figure 1). Qubits are represented as leaves of the tree, labeled x_{0} through x_{7} in Figure 1. One can perform a two-qubit operation such as the CNOT gate depicted, which entangles qubits x_{3} and x_{4}; only the nodes along the path connecting the two qubits in the tree (shown in red) need to be considered in the computation of the gate.
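For contrast with the tree approach, a brute-force statevector simulator must touch every amplitude when applying a two-qubit gate. A minimal sketch (the function name and qubit-ordering convention are my own, not the group's simulator):

```python
import numpy as np

# Standard CNOT matrix acting on (control, target) basis |00>,|01>,|10>,|11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def apply_two_qubit_gate(psi, gate, q1, q2, n):
    """Apply a 4x4 gate to qubits q1 and q2 of an n-qubit statevector."""
    t = np.moveaxis(psi.reshape([2] * n), (q1, q2), (0, 1))  # targets to front
    shape = t.shape
    t = (gate @ t.reshape(4, -1)).reshape(shape)             # act on the 4-dim block
    return np.moveaxis(t, (0, 1), (q1, q2)).reshape(-1)      # restore qubit order
```

Note that every one of the 2^{n} amplitudes is reshuffled here; in the tree representation, by contrast, only the nodes on the path between the two qubits are updated, which is where the savings come from.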

The tree representation assumes there is some structure to the wavefunction we want to simulate, so we expect it to work well in low-entropy regimes, such as a simple QFT (quantum Fourier transform) circuit[3]. High-entropy regimes, such as the random circuits studied in the Google paper[2], [6], present more of a challenge. Because we are interested in simulating quantum algorithms, we examine Shor's algorithm for integer factorization[7]. This algorithm is important for encryption and is a canonical quantum algorithm in that, on an ideal quantum device, it does in polynomial time what would take exponential time on a classical device. A simplified quantum circuit for Shor's algorithm is shown in Figure 2[8].
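As background, the quantum part of Shor's algorithm finds the period *r* of f(x) = a^{x} mod N; recovering the factors from *r* is purely classical. A sketch of that classical post-processing step (the function name is illustrative, not from the referenced implementation):

```python
from math import gcd

def factors_from_period(a, r, N):
    """Given the period r of f(x) = a**x mod N (found by the quantum
    part of Shor's algorithm), try to recover nontrivial factors of N."""
    if r % 2 == 1:
        return None                 # need an even period; retry with a new a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                 # trivial square root; retry with a new a
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if p > 1 and q > 1 and p * q == N:
        return p, q
    return None

# For N = 15 with base a = 7, the period is r = 4, yielding factors 3 and 5.
```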

**Entropy Analysis and Discussion**

We would like to analyze Shor's algorithm to see 1) how a canonical quantum algorithm behaves in an entropy context (i.e. which entropy regime the algorithm belongs to), and 2) how well a tree-based approach can simulate a quantum algorithm. We measure the entanglement at each node *p* in the tree as the circuit proceeds using the von Neumann entropy *S*^{p}:

*S*^{p} = −*k_{B}* Tr(*ρ*^{p} ln *ρ*^{p}),

where *ρ*^{p} is the reduced density operator at node *p* and *k_{B}* is the Boltzmann constant. (Note, in the figures below we report the node perplexity Λ = *e*^{S^{p}} instead of entropy directly, as it can be related to the number of basis functions required at node *p* to get an accurate description of the wavefunction.)
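As a concrete illustration of these quantities, the node entropy and perplexity can be computed from the eigenvalues of a reduced density matrix. A minimal sketch (function names are mine; units chosen so that k_B = 1), checked against a maximally entangled Bell pair, whose one-qubit reduced density matrix is I/2:

```python
import numpy as np

def node_entropy(rho, k_B=1.0):
    """Von Neumann entropy S = -k_B Tr(rho ln rho), computed from the
    eigenvalues of the (Hermitian) reduced density matrix rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                # drop zero eigenvalues (0 ln 0 -> 0)
    return -k_B * np.sum(w * np.log(w))

def perplexity(S):
    """Lambda = e**S: roughly the number of basis functions needed."""
    return np.exp(S)

# Tracing one qubit out of a Bell pair leaves rho = I/2,
# so S = ln 2 and the perplexity is 2.
rho_bell = np.eye(2) / 2
```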

Figures 3 and 4 show an entropy analysis from simulating Shor's circuit on 22 qubits, using a tree with 22 leaf nodes and a roughly binary structure similar to that shown in Figure 1. In this case, we are factoring the integer N=15. Of the 22 qubits, only 5 are used to represent N—the particular algorithm[8] we implemented uses 4n + 2 qubits, where n is the number of qubits used to represent the integer N, and 2n of these qubits are used for the measurement space, as shown in Figure 2. Figure 3 shows a heatmap of relative node entropy for the nodes at the bottom of the tree, i.e. the 22 nodes representing qubits. Nodes 12-22 represent the measurement space, where the maximal entropy (red) clearly follows a stair-like pattern with the ten applications of the **U_{a}** gates (from the circuit diagram in Figure 2).

The heatmap in Figure 3 shows only the entanglement at the bottom of the tree—a better picture takes into account the entropy at higher nodes. Figure 4 shows a video of the 22-qubit tree as the Shor circuit progresses, with each node colored according to its entropy relative to the highest node entropy observed over the whole circuit. The darker the color, the higher the local node entropy at that point in the circuit.

From this visualization, we can see that entropy hot spots occur not in the leaves but in the second and third layers of the tree, and that the total entropy of this system increases as the circuit progresses.

The simulation described above took 22 s on a 2.9 GHz Intel Core i9 (MacBook Pro). Moving to 32 qubits to factor N=83 (with a similar binary tree structure) took 1796 s. This roughly 80x jump is due partly to more gates (increased circuit depth), but mostly to each gate operation taking significantly more time on a larger tree. Note again that the tree representation is an approximation to the wavefunction of the system; even so, we obtain the correct answer from the circuit. More structured trees (with more layers and edges) are more aggressive approximations which we can exploit, trading a small loss of accuracy for computational speedup. There are some interesting ideas here about finding the optimal tree for a given quantum algorithm, which I plan to explore further. (How can we use entropy hot spots like those in Figure 4 to optimize the tree structure? If we could find the optimal tree for a simulation in polynomial time, would we even need a quantum computer?)

Using these tree structures optimally is all about exploiting structure in the quantum circuit, i.e. knowing where the entropy hot spots are (and are not) located. In a truly random circuit there is no structure to exploit, so we would expect aggressive tree approximations to fail, and a classical simulation of such circuits to scale exponentially. This, too, is a topic I plan to explore.

**Conclusion and Outlook**

So, what makes a quantum algorithm difficult to simulate? After an initial analysis of Shor’s algorithm using a tree representation of the wavefunction, I would phrase the answer as a lack of exploitable structure, i.e. high and uniform node entropy. These tree structures can bring classical simulations down to polynomial scaling, but only if optimal structures are found. This project has brought up some very interesting questions about optimization, but has really just scratched the surface of the subject. Entropy will be a useful lens through which to analyze these tree simulations moving forward, and in general to understand the limits of classical simulation of quantum circuits.