First select one of the available problem instances and algorithms (or use the default initial selections). Then perform one or more steps. The various displays show different aspects of the algorithm's behavior, and allow comparing different algorithms on a single instance.

These search problems involve *n* Boolean variables, so an assignment to all variables corresponds to a string of *n* bits. Such problems have 2^{n} assignments, or search states. A problem instance is defined by associating a cost with each state; solutions are those states with zero cost. The distance between two states is the number of variables to which they assign different values, which equals the Hamming distance between the corresponding bit strings.
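As a concrete illustration (states encoded as integers; not code from the applet itself), the distance between two assignments can be computed as:

```python
def hamming_distance(a: int, b: int) -> int:
    """Distance between two assignments encoded as n-bit integers:
    the number of variables on which they differ."""
    return bin(a ^ b).count("1")

# For n = 4 variables there are 2**4 = 16 states; e.g. 0b1010 and
# 0b0110 differ in exactly two variable assignments.
print(hamming_distance(0b1010, 0b0110))  # → 2
```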

The available problem instances include examples of the NP-complete satisfiability problem, specifically 3-SAT instances with *m* clauses. The examples include both relatively easy and relatively hard instances, and instances with various numbers of solutions. The descriptions of these instances are read from data files located on the web server. If the files are not available, e.g., because the server is down, attempting to create the instance gives an error message.

The "Hamming" instances are problems with high symmetry, illustrating the extremes of very easy and very hard cases. For the easy case, the cost of each state is just the number of 1-bits it contains. In the hard case, the cost is one more than the number of 1-bits, except that the state with all 1's is given cost 0.
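A sketch of these two cost functions, assuming states are encoded as *n*-bit integers (the function names are illustrative, not from the applet):

```python
def easy_hamming_cost(state: int) -> int:
    # Cost is the number of 1-bits; the unique solution is the
    # all-zeros state, and costs decrease smoothly toward it.
    return bin(state).count("1")

def hard_hamming_cost(state: int, n: int) -> int:
    # The all-ones state is the unique solution, but every other
    # state costs one more than its number of 1-bits, so costs
    # *increase* toward the solution, misleading cost-based heuristics.
    if state == (1 << n) - 1:
        return 0
    return bin(state).count("1") + 1
```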

The available algorithms include

- Unstructured search (amplitude amplification).
- A heuristic method based on problem structure.
- Two variants of an algorithm making larger changes per step than are consistent with the adiabatic limit.
- A discrete version of adiabatic quantum computing.
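For reference, unstructured search can be sketched for a toy instance as below. This is a standard Grover-style amplitude-amplification step (flip the phase of solution amplitudes, then invert all amplitudes about their mean); the applet's implementation may differ in detail:

```python
import numpy as np

def amplification_step(amps: np.ndarray, solutions: set) -> np.ndarray:
    """One Grover-style step: phase-flip the solution amplitudes,
    then invert every amplitude about the mean amplitude."""
    amps = amps.copy()
    amps[list(solutions)] *= -1
    return 2 * amps.mean() - amps

n = 3                                # 2**3 = 8 states
N = 2 ** n
amps = np.full(N, 1 / np.sqrt(N))    # uniform superposition
solutions = {5}                      # one marked solution state
for _ in range(2):                   # ~ (pi/4) * sqrt(N) steps is optimal
    amps = amplification_step(amps, solutions)
print(abs(amps[5]) ** 2)             # probability of observing the solution
```

After two steps the solution probability is already about 0.945, compared to 1/8 for a random guess.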

Quantum search involves superpositions of all search states, specified by a vector associating a complex number (called an *amplitude*) with each state. Algorithms change this vector through a series of steps, each corresponding to multiplication by a unitary matrix.

Most of the algorithms use a different matrix for each step. These matrices are defined in terms of an initial and final matrix, and the number of steps required to change between these forms. This number, denoted as *j*, is specified in the text field next to the algorithm selection. Different values of *j* produce different matrices for the steps, and hence different algorithms.
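A minimal sketch of this idea, using 2×2 rotation matrices as stand-ins for the applet's actual step matrices (the endpoint angles here are arbitrary): step *k* of a *j*-step schedule interpolates between an initial and a final form, so different values of *j* yield different matrices for the same step number.

```python
import numpy as np

def step_matrix(k: int, j: int) -> np.ndarray:
    """Unitary for step k of a j-step schedule: a rotation whose
    angle interpolates linearly from an initial form (k=0) to a
    final form (k=j).  Illustrative only, not the applet's matrices."""
    f = k / j                                   # interpolation fraction
    theta = (1 - f) * (np.pi / 2) + f * 0.0     # example endpoint angles
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# The same step number gives different matrices for different j,
# hence different algorithms:
print(np.allclose(step_matrix(1, 2), step_matrix(1, 4)))  # → False
```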

Generally the matrix choices are meant for an algorithm that will be run for *j* steps. The simulation allows the algorithm to run for any number of steps, not necessarily equal to *j*. This can illustrate the possible benefit of stopping the algorithm early (e.g., to minimize expected search cost over repeated trials rather than maximizing the probability to find a solution with a single trial of the algorithm).

The algorithms used here are not optimal for these instances. For any particular instance, it is possible to design an algorithm that can solve it in just one step. However, in practice, there will not be sufficient prior knowledge about the instance to make this computationally feasible.

Press the "step" button to run the algorithm.

By default, each press performs one step. To perform multiple steps, enter the number of steps in the field below the "step" button before pressing it.

While running multiple steps, the "step" button changes to "STOP": another press stops the evaluation. For larger problems, displaying the amplitudes for all states can be slow, in which case it may take a while to stop. The other display choices update more rapidly.

The "back" button reverses the algorithm by one step.

The display tabs select the type of information to show about the algorithm behavior:

*amplitudes:* Show amplitudes in the complex plane for each state, colored according to the selected state property (e.g., state costs). Also show the total probability in states grouped according to this property (i.e., the sum, over all states with the same property value, of the squared magnitude of the state's amplitude). In all cases, the solutions are shown in black. The bottom of the display shows how much the normalization of the amplitudes differs from one (as an indication of numerical precision), the probability in all solution states, and, if appropriate, the expected value of the selected property.

A popup menu adjusts the plot, e.g., showing normalized amplitudes instead of amplitudes scaled to the plot area, and including a global phase factor that keeps the amplitude of the first solution state along the positive real axis (this solution's amplitude is shown in gray instead of black).

*probability:* Density plot of probability in states grouped by two selected properties, e.g., cost and distance.

*history:* Behavior of algorithms, as a function of the number of steps, for the current problem instance. The current step is marked on the curve(s) corresponding to the current algorithm. A popup menu toggles the plot x-axis between the actual number of steps and the fraction relative to the maximum taken for each algorithm.

The values correspond to the behavior that would be seen if the algorithm were terminated at the corresponding number of steps by observing the state of the quantum computation. The available behaviors are:

- probability to find a solution, ranging between 0 and 1
- expected cost of the observed state, ranging between 0 (when the probability for a solution is 1) and that for random selection, i.e., the average cost of all search states
- average total search cost, ranging between 0 and slightly above the average cost of random selection with replacement (i.e., the number of search states divided by the number of solutions). This cost measure is the average number of steps required to find a solution, i.e., the number of algorithm steps divided by the probability to find a solution within that many steps (accounting for the average number of times the algorithm must be repeated to find a solution).
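The last measure amounts to a one-line calculation, shown here with illustrative values (not from any particular instance). It captures why stopping early can pay off: a cheap run repeated a few times can beat a single long, high-probability run.

```python
def average_total_cost(steps: int, p_solution: float) -> float:
    """Expected number of algorithm steps until a solution is found,
    when a run of `steps` steps succeeds with probability p_solution
    and is repeated until success."""
    return steps / p_solution

# Stopping early at 5 steps with P = 0.5 beats running 20 steps
# to reach P = 0.9:
print(average_total_cost(5, 0.5))    # → 10.0
print(average_total_cost(20, 0.9))   # → 22.22...
```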

*summary:* Summary of the behavior of the algorithms examined for the current problem instance. The table shows the minimum average total search cost for each algorithm (over the range of steps for which the algorithm was evaluated).

The available state properties are:

*cost:* the cost associated with a state, e.g., the number of conflicting clauses in a 3-SAT problem

*distance:* the Hamming distance to the nearest solution

*type:* the local search character of the state, based on its minimum neighbor(s), i.e., those with the lowest cost, and on plateaus (states with no lower-cost neighbors but at least one same-cost neighbor). The values somewhat arbitrarily group states based on whether local search will, may, or won't lead to a solution:

- 0: a solution
- 1: local search leads to a solution
- 2: a plateau with all states at its edge leading to a solution
- 3: a boundary: some minimum neighbors lead to a solution, others do not
- 4: a plateau, with some edge states leading to a solution, others not
- 5: local search does not lead to a solution
- 6: a plateau, no edges lead to a solution
- 7: a local minimum: a nonsolution state with all neighbors having higher cost
- 8: a plateau with edges having no lower cost neighbors

The image schematically shows the main types for a one-dimensional search problem. The plateau types 2, 4, 6, and 8 are analogous to types 1, 3, 5, and 7, respectively. These groups do not take into account the difficulty of local search on plateaus (where neighbor costs give no guide, so the difficulty depends on the number of states in the plateau).
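A coarse version of this classification, covering only types 0 and 7 and ignoring plateaus and the reachability distinctions, could look like the following sketch (the cost function shown is the hard Hamming instance described earlier; the function names are illustrative):

```python
def neighbors(state: int, n: int) -> list:
    # States at Hamming distance 1: flip each of the n bits in turn.
    return [state ^ (1 << i) for i in range(n)]

def classify(state: int, cost, n: int):
    """Coarse state type: 0 = solution, 7 = strict local minimum
    (nonsolution with all neighbors costing more), else local search
    can descend from this state."""
    c = cost(state)
    if c == 0:
        return 0
    if all(cost(s) > c for s in neighbors(state, n)):
        return 7
    return "downhill"

def hard_cost(state: int, n: int = 3) -> int:
    # Hard Hamming instance: all-ones is the solution; every other
    # state costs one more than its number of 1-bits.
    if state == (1 << n) - 1:
        return 0
    return bin(state).count("1") + 1

print(classify(0b111, hard_cost, 3))  # → 0 (the solution)
print(classify(0b000, hard_cost, 3))  # → 7 (a local minimum)
```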

Tad Hogg