This diagram illustrates how the team reduces quantum circuit complexity in machine learning using three encoding methods—variational, genetic, and matrix product state algorithms. All methods significantly reduce circuit depth while preserving accuracy, as shown by the histogram of reduced CNOT gate counts. Credit: Intelligent Computing (2024). DOI: 10.34133/icomputing.0100

Quantum encoding methods could slash circuit complexity in machine learning

by Tech Xplore

A recent study by researchers from CSIRO and the University of Melbourne has made progress in quantum machine learning, a field that aims to achieve a quantum advantage over classical machine learning.

Their work demonstrates that quantum circuits for data encoding in quantum machine learning can be greatly simplified without compromising accuracy or robustness. The research was published Sept. 12 in Intelligent Computing.

The team's results, validated through both simulations and experiments on IBM quantum devices, show that their innovative encoding methods reduced circuit depth by a factor of 100 on average compared to traditional approaches while achieving similar classification accuracies. The findings offer an exciting new pathway for the practical application of quantum machine learning on current quantum devices.

Looking forward, the team aims to scale these models for larger, more complex datasets and explore further optimizations in quantum state encoding and quantum machine learning architecture design.

One of the key obstacles to efficient quantum machine learning has been encoding classical data into quantum states, a computationally demanding task that typically requires deep, heavily entangling circuits. To overcome this, the team introduced three encoding methods that approximate the quantum state of the data using much shallower circuits.
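To make the bottleneck concrete, the sketch below (plain NumPy, not the authors' code; the padding scheme and random pixel values are illustrative assumptions) shows amplitude encoding, in which a classical data vector becomes the amplitudes of a multi-qubit state. A 28-by-28 image fits into just 10 qubits, but preparing an arbitrary state over those amplitudes exactly is what makes the circuits deep.

```python
import numpy as np

# Amplitude encoding: a 28x28 grayscale image has 784 pixel values.
# Padding to 1024 = 2**10 lets the whole image occupy the amplitudes of
# only 10 qubits, but exactly preparing an arbitrary 1024-amplitude state
# generally needs a number of CNOT gates that grows exponentially with the
# qubit count -- the cost the approximate encodings are designed to avoid.
image = np.random.rand(28, 28)             # stand-in for an MNIST digit
amplitudes = np.zeros(2**10)
amplitudes[:image.size] = image.flatten()
amplitudes /= np.linalg.norm(amplitudes)   # quantum states are unit vectors

print(amplitudes.shape)                    # (1024,)
print(np.sum(amplitudes**2))               # 1.0
```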

These methods—matrix product state, genetic, and variational algorithms—maintained classification accuracy on the MNIST image dataset and two others while enhancing resilience against adversarial data manipulation.

Each method approximates the quantum state that encodes the classical data in its own way, enabling efficient state preparation:

  1. Matrix product state encoding: This approach uses tensor networks to create quantum states that can be sequentially disentangled. The structure allows quantum states with low entanglement to be prepared with a small number of Controlled-NOT (CNOT) gates, further reducing complexity (a NumPy sketch of the underlying truncation idea follows this list).
  2. Genetic algorithm for state preparation: Inspired by evolutionary processes, this approach optimizes state preparation by generating various circuit configurations, selecting the most efficient ones, and minimizing the number of CNOT gates, which also makes the circuits more resistant to noise.
  3. Variational encoding: This method uses trainable parameters within a layered circuit structure, allowing the encoded state to reach a target accuracy with fewer layers. This reduces the need for extensive entangling operations and typically lowers computational costs (a circuit-level sketch follows this list).
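For the matrix product state approach (item 1 above), the core operation can be illustrated without any quantum hardware: repeatedly split the target state with singular value decompositions and keep only the largest singular values at each bond. The following is a minimal NumPy sketch of that truncation step, not the authors' implementation; the bond dimension chi_max and the random test state are assumptions for illustration. The truncated tensors are what a shallow, sequentially applied circuit would then prepare.

```python
import numpy as np

def mps_truncate(state, n_qubits, chi_max):
    """Split a 2**n_qubits state vector into a matrix product state by
    sequential SVDs, keeping at most chi_max singular values per bond."""
    tensors, psi, left = [], state.reshape(1, -1), 1
    for _ in range(n_qubits - 1):
        psi = psi.reshape(left * 2, -1)
        u, s, vh = np.linalg.svd(psi, full_matrices=False)
        chi = min(chi_max, len(s))               # truncate the bond
        tensors.append(u[:, :chi].reshape(left, 2, chi))
        psi = np.diag(s[:chi]) @ vh[:chi, :]     # pass the remainder along
        left = chi
    tensors.append(psi.reshape(left, 2, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the MPS back into a full state vector (for checking)."""
    psi = tensors[0]
    for t in tensors[1:]:
        psi = np.tensordot(psi, t, axes=([-1], [0]))
    return psi.reshape(-1)

# Toy check: how well does a bond-dimension-4 MPS capture an 8-qubit state?
n = 8
target = np.random.rand(2**n)
target /= np.linalg.norm(target)
approx = mps_to_state(mps_truncate(target, n, chi_max=4))
approx /= np.linalg.norm(approx)
print("fidelity:", abs(np.vdot(target, approx)) ** 2)
```

The less entangled the target state, the less the truncation costs in fidelity, which is why low-entanglement encodings of data can be prepared with only a handful of CNOT gates.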
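For the variational encoding (item 3 above), a shallow layered circuit with trainable rotation angles is optimized until its output state is close enough to the target encoding. The sketch below uses Qiskit and SciPy purely for illustration; the layer count, the RY-plus-CNOT ansatz shape, and the COBYLA optimizer are assumptions, not details from the paper. Each layer applies a parameterized RY rotation to every qubit followed by a line of CNOTs, and the optimizer maximizes the overlap with the target state.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector
from scipy.optimize import minimize

N_QUBITS, N_LAYERS = 4, 2   # deliberately shallow: 2 layers, not a deep exact circuit

def ansatz(params):
    """Layered circuit: RY rotations on every qubit, then a line of CNOTs."""
    qc = QuantumCircuit(N_QUBITS)
    params = params.reshape(N_LAYERS, N_QUBITS)
    for layer in range(N_LAYERS):
        for q in range(N_QUBITS):
            qc.ry(params[layer, q], q)
        for q in range(N_QUBITS - 1):
            qc.cx(q, q + 1)
    return qc

# Target: a normalized classical data vector (random stand-in here).
target = np.random.rand(2**N_QUBITS)
target /= np.linalg.norm(target)

def infidelity(params):
    state = Statevector(ansatz(params)).data
    return 1.0 - abs(np.vdot(target, state)) ** 2

x0 = np.random.uniform(0, 2 * np.pi, N_LAYERS * N_QUBITS)
result = minimize(infidelity, x0, method="COBYLA")
print("encoding fidelity:", 1.0 - result.fun)
```

In practice the number of layers would be increased until the fidelity crosses the desired threshold, trading a small amount of encoding accuracy for a much shallower circuit.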

The work aligns with broader goals in quantum machine learning to build efficient, reliable quantum models for areas such as image recognition, cybersecurity, and complex data analysis. The reduction in circuit depth is critical for achieving practical quantum machine learning on current devices, which are often limited by gate fidelity and qubit count.

The increased robustness of these models to adversarial attacks opens up new possibilities for secure quantum machine learning applications in sectors where resilience to tampering is essential.

More information: Maxwell T. West et al, Drastic Circuit Depth Reductions with Preserved Adversarial Robustness by Approximate Encoding for Quantum Machine Learning, Intelligent Computing (2024). DOI: 10.34133/icomputing.0100

Provided by Intelligent Computing