David Wakeham: QML Researcher

I'm a quantum machine learning (QML) researcher in Maria Schuld's group at Xanadu. I'm interested in quantum algorithms, statistical learning theory, and symmetry. QML is a research area exploring the interplay of ideas from quantum computing and machine learning. This goes in both directions. We can ask whether the unique abilities of quantum computers can help us train machine learning models faster, or on quantum data, where they are a better fit than classical models. On the other hand, classical machine learning can give us insights into quantum algorithms and into how well quantum models generalize.

This page gives an informal overview of QML. For a more detailed introduction to quantum computing, see this page. Once you've read that, you may want to check out the companion tutorial on QML.

Image/style credits to Tarik Elkhateeb and the PennyLane.ai What is QML? page.

The two-way street


The success story of modern deep learning is also the story of the hardware it runs on: the parallel GPUs and architectural innovations which allow an LLM, for instance, to learn on an internet-sized dataset. In QML, it is natural to start with the hardware at our disposal, namely noisy quantum circuits, and ask if the associated architecture is superior for certain tasks. This leads to a class of algorithms called variational quantum circuits (VQCs).
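
To make the idea concrete, here is a toy sketch of a variational circuit in plain NumPy (a simulation, not a real device or any particular library): a single-qubit circuit whose rotation angle is trained by gradient descent, using the parameter-shift rule to get gradients from circuit evaluations alone. The specific circuit, cost, and learning rate are illustrative assumptions, not a method from the text.

```python
import numpy as np

# Toy single-qubit "variational circuit": |0> rotated by RY(theta),
# then measured in the Z basis. Here <Z> = cos(theta), so the problem
# is exactly solvable, but we train it the way a real VQC is trained.

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])

def expval_z(theta):
    """Expectation <psi|Z|psi> for |psi> = RY(theta)|0>."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

def parameter_shift_grad(theta):
    """Parameter-shift rule: exact gradient from two circuit runs."""
    return 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))

# Train theta to minimize <Z>; the minimum <Z> = -1 sits at theta = pi.
theta = 0.1
for _ in range(100):
    theta -= 0.4 * parameter_shift_grad(theta)

print(theta, expval_z(theta))  # theta approaches pi, <Z> approaches -1
```

The parameter-shift trick matters because on quantum hardware we cannot backpropagate through the circuit; gradients must come from extra circuit evaluations.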


On the other hand, we can use classical tools such as Fourier series and kernel learning to characterize quantum models. This provides important insights into their expressivity, generalization and training mechanics. This shows that QML really is a two-way street!
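
As a small illustration of the Fourier-series viewpoint (a NumPy sketch under assumed circuit choices, not code from the text): a single-qubit model that "re-uploads" the data x through two RZ(x) encodings can only produce Fourier series with integer frequencies up to 2, whatever its trainable parameters are. We can verify this numerically by inspecting the model's spectrum.

```python
import numpy as np

# A quantum model with two "data re-uploading" layers: trainable RY
# rotations interleaved with RZ(x) data encodings. The Fourier picture
# predicts f(x) is a trigonometric polynomial of degree at most 2.

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(x):
    return np.diag([np.exp(-1j * x / 2), np.exp(1j * x / 2)])

rng = np.random.default_rng(0)
thetas = rng.uniform(0, 2 * np.pi, size=3)  # arbitrary trainable parameters

def model(x):
    """f(x) = <psi(x)| Z |psi(x)> for the layered circuit."""
    psi = np.array([1.0, 0.0], dtype=complex)
    psi = ry(thetas[0]) @ psi
    psi = rz(x) @ psi          # encode the data...
    psi = ry(thetas[1]) @ psi
    psi = rz(x) @ psi          # ...and re-upload it
    psi = ry(thetas[2]) @ psi
    return np.real(np.conj(psi) @ (np.diag([1.0, -1.0]) @ psi))

# Sample f over one period and inspect its Fourier spectrum.
xs = np.linspace(0, 2 * np.pi, 64, endpoint=False)
coeffs = np.fft.fft([model(x) for x in xs]) / 64
freqs = np.fft.fftfreq(64, d=2 * np.pi / 64) * 2 * np.pi  # integer freqs

support = {int(round(f)) for f, c in zip(freqs, coeffs) if abs(c) > 1e-10}
print(support)  # a subset of {-2, -1, 0, 1, 2}: only low frequencies
```

However the parameters are trained, the spectrum stays confined to these frequencies: a purely classical statement about the expressivity of the quantum model.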

Performance from NISQ to ISQ


One advantage of variational circuits is that they run on the devices we have now, and can be easily simulated. Because these devices and simulations are small, we cannot rely on theoretical arguments which only hold for very large quantum computers. Instead, we need to use benchmarks—performance on real datasets—to see how they stack up, as is standard practice in classical ML.


The quantum devices you can find in the lab right now are error-prone and modest in size. They can implement small VQCs and demonstrate quantum advantage on certain other tasks, but we don't expect them to provide useful applications just yet. In the not-too-distant future, we hope these Noisy Intermediate-Scale Quantum (NISQ) computers will be replaced by Intermediate-Scale Quantum (ISQ) ones, which are small but fault-tolerant. Finding useful QML algorithms for these devices is an open problem.

Symmetry and inductive bias


If we overcome these constraints (on size and noise), we will be rewarded with the "holy grail": a Fault-Tolerant Quantum Computer (FTQC), on which we can run large-scale quantum computations with negligible error. But even if we had such a device, what would we do with it? Variational circuits come from asking: what can we do with this hardware? The question now is: what do quantum computers do best? This is a very different beast.

Quantum complexity theory suggests that quantum computers are best at discovering hidden symmetries. The quantum computer queries multiple items, attaches a phase to each, and interferes these phases cleverly to extract the result. Shor's algorithm for breaking RSA is a famous example.
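
The simplest instance of this query-phase-interfere pattern is Deutsch's algorithm (a standard textbook example, sketched here in NumPy rather than run on hardware): with a single quantum query, it decides whether a one-bit function is constant or balanced, a global property no single classical query can reveal.

```python
import numpy as np

# Deutsch's algorithm: one quantum query decides whether a function
# f: {0,1} -> {0,1} is constant (f(0) = f(1)) or balanced (f(0) != f(1)).

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def phase_oracle(f):
    """Diagonal oracle attaching the phase (-1)^f(x) to basis state |x>."""
    return np.diag([(-1.0) ** f(0), (-1.0) ** f(1)])

def deutsch(f):
    """Classify f with a single oracle query."""
    psi = np.array([1.0, 0.0])      # start in |0>
    psi = H @ psi                   # query both inputs in superposition
    psi = phase_oracle(f) @ psi     # attach a phase to each branch
    psi = H @ psi                   # interfere the phases
    # Measuring |0> with certainty means constant; |1> means balanced.
    return "constant" if abs(psi[0]) ** 2 > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```

Shor's algorithm follows the same template at much larger scale, with the quantum Fourier transform doing the interference and the hidden structure being the period of modular exponentiation.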


It turns out that quantum computers can use similar techniques to learn hidden symmetries from data. Many real-world problems display approximate symmetry, so we expect this to be not only fast but useful! Turning things around, what does this teach us about quantum computing? Using tools from ML, it tells us that quantum models have an inductive bias: certain guesses they like to make more than others. Characterizing these biases will tell us what other problems quantum computers might be good at learning, and forms an exciting area for future research.
