A research group at TCG CREST (The Chatterjee Group Centres for Research and Education in Science and Technology) has found a way to make a key quantum algorithm run faster by doing something counterintuitive: keeping the wrong answer.
The work, led by Professor Srinivasa Prasannaa, improved the performance of the Harrow–Hassidim–Lloyd (HHL) algorithm, a foundational method for solving systems of linear equations on quantum computers. The approach, called Psi(ψ)HHL, reduces a major runtime bottleneck without adding extra quantum hardware or circuit depth.
“At the end of the day, what you’re doing is solving problems,” Prof Prasannaa tells AIM. “A quantum algorithm is what tells you how to change those qubit strings towards solving a specific problem, just as a classical algorithm tells you how to change bit strings.”
The breakthrough matters because HHL sits at the heart of many proposed quantum applications, from chemistry simulations to optimisation.
While HHL, proposed in 2009 by Aram W. Harrow (then at the University of Bristol), Avinatan Hassidim and Seth Lloyd (both then at MIT), promised exponential speedups in theory, one practical factor kept holding it back.
It was the condition number, denoted by kappa (κ), a measure of how easy or hard it is to invert the matrix in the system of linear equations. When κ is large, the number of times the algorithm must be executed can grow dramatically.
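For a symmetric matrix, the condition number is simply the ratio of the largest to the smallest eigenvalue magnitude. A minimal sketch (not from the paper; the 2×2 matrices below are illustrative) shows how a nearly singular matrix sends κ soaring:

```python
import math

def cond_2x2_sym(a, b, d):
    # Condition number of the symmetric matrix [[a, b], [b, d]]:
    # ratio of the eigenvalue magnitudes, found via the quadratic formula.
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return max(abs(lam1), abs(lam2)) / min(abs(lam1), abs(lam2))

# Well-conditioned: close to a scaled identity, kappa near 1.
print(cond_2x2_sym(2.0, 0.1, 2.0))    # ~1.105
# Ill-conditioned: nearly singular, kappa explodes.
print(cond_2x2_sym(1.0, 0.999, 1.0))  # ~1999
```

The second matrix is almost rank-deficient, which is exactly the regime where HHL's runtime penalty bites.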
“The runtime goes as kappa cubed in HHL,” Prof Prasannaa explains. “While it is polynomial, in practical calculations, it is going to be very difficult.”
Why HHL Struggles in Practice
HHL solves systems of linear equations of the form Ax = b, a problem that appears across science and engineering. The difficulty lies in how quantum computers produce results. Unlike classical machines, quantum systems rely on probabilities and repeated measurements.
“In all quantum algorithms, you have an extra step called measurement,” he notes. “Because the outcome of a measure is going to be random, you will have to repeat any quantum circuit execution several times before you build enough statistics.”
When kappa is large, the probability of measuring the correct result drops sharply. That forces researchers to repeatedly run the same quantum circuit. This creates a practical problem. Researchers do not know the condition number in advance, and calculating it classically is hard.
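In textbook HHL, the success probability of the final post-selection step falls roughly as 1/κ², so the expected number of circuit repetitions grows roughly as κ². A toy Monte Carlo sketch (the 1/κ² model is the standard textbook scaling, assumed here for illustration) makes the blow-up concrete:

```python
import random

def shots_until_success(p, rng):
    # Count circuit repetitions until the post-selection measurement
    # succeeds, treating each shot as an independent Bernoulli trial.
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)
for kappa in (2, 10, 50):
    p = 1.0 / kappa**2           # assumed success rate ~ 1/kappa^2
    trials = [shots_until_success(p, rng) for _ in range(2000)]
    print(kappa, sum(trials) / len(trials))   # average repetitions ~ kappa^2
```

Going from κ = 2 to κ = 50 multiplies the expected shot count by over six hundred, which is why an unknown, growing κ is such a practical headache.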
“You’re not supposed to know kappa before you start executing an HHL algorithm,” Prof Prasannaa clarifies. “That’s why you went to a quantum computer in the first place.”
Over the years, other researchers tried to fix this by adding complex modules to HHL. These reduced the dependence on kappa but dramatically increased circuit size. “These methods came with having many, many more gates in your circuits,” he adds.
For today’s noisy quantum hardware, as well as the machines expected over the next several years, that trade-off makes many of those fixes impractical.
The Accident That Led to PsiHHL
PsiHHL emerged while the team worked on a different problem. The original goal was to speed up the calculation of molecular properties using HHL, not to redesign the algorithm itself. “A PhD student had joined us, and I thought he could just do a low-hanging fruit kind of a problem,” Prof Prasannaa recalls.
That student, Peniel Bertrand Tsemo, kept getting inconsistent results. “Then we found out he’s getting bad answers because he’s not repeating the experiment enough number of times.”
As the team investigated, they realised that kappa grew, although it did so slowly, for larger molecular systems. That made HHL increasingly inefficient. The group explored standard fixes, including amplitude amplification for boosting the probability of measuring desired states, but hit a wall.
The turning point came unexpectedly.
A post-doctoral fellow, Akshaya Jayashankar, “had this very interesting idea of subtracting two different signals,” Prof Prasannaa recalls. “It was just an intuition.” Instead of focusing only on the correct measurement outcome, the team tried something unusual. They deliberately selected the wrong outcome first.
“What we do is in our first HHL execution, we completely ignore the correct answer and instead deliberately select the wrong answer,” he explains.
In a second run, they introduced a small tweak that produced a mixed signal of both wrong and right answers. Subtracting the two cancelled the unwanted result and isolated the correct one.
This simple change had a major effect. By running HHL twice, PsiHHL reduced the runtime scaling from kappa cubed to kappa for ill-conditioned systems.
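The article does not spell out the circuits, but the cancellation idea can be caricatured with plain vectors: suppose one run yields only the unwanted component, and a slightly tweaked second run yields a mixture of wrong and right. The names, vectors, and mixing model below are illustrative assumptions, not the PsiHHL construction itself:

```python
# Toy cancellation sketch: purely illustrative, not the PsiHHL circuits.
wrong = [0.8, 0.6, 0.0]          # unwanted component selected in run 1
right = [0.0, 0.0, 1.0]          # the answer we actually want

eps = 0.1                        # assumed small tweak mixing in the answer
run1 = wrong
run2 = [w + eps * r for w, r in zip(wrong, right)]  # mixed signal, run 2

# Subtracting the two runs cancels `wrong` and isolates `right`.
recovered = [(b - a) / eps for a, b in zip(run1, run2)]
print(recovered)  # → [0.0, 0.0, 1.0]
```

The point of the caricature is only that two executions suffice: the wrong answer from the first run acts as a reference signal that cancels itself out of the second.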
What This Means for Quantum Advantage
The result does not mean quantum computers will suddenly outperform classical machines across the board. Prof Prasannaa remains cautious. “Just because an algorithm promises an exponential advantage in principle doesn’t mean you can pick any real-world application.”
Quantum advantage depends on many factors, including algorithm design, hardware maturity, and problem structure. He frames the requirement in three parts.
“There has to be maturity on the algorithm side, on the software side and on the hardware side.” He also warns against underestimating classical systems, calling them incredibly mature.
Still, PsiHHL addresses a real bottleneck in quantum computing research. By improving algorithm efficiency without adding hardware cost, it aligns better with early fault-tolerant-era quantum machines.
“If this goes on, I would be more conservative and say you have to wait at least another 10 years before you actually start seeing [quantum advantage],” Prof Prasannaa notes.
For now, the work highlights how progress in quantum computing does not always come from bigger machines. Sometimes, it comes from rethinking what to do with a wrong answer.
The post These Indian Professors Fixed a 16-Year-Old Quantum Algorithm With the Wrong Answer appeared first on Analytics India Magazine.