New Attack Method Heightens Security Risks for Quantum Neural Networks

Recent advancements in quantum computing have raised concerns about the security of Quantum Neural Networks (QNNs), particularly as they become more widely available through services such as QNN-as-a-Service (QNNaaS). A new paper titled "Quantum Neural Network Extraction Attack via Split Co-Teaching" by Zhenxiao Fu and Fan Chen introduces a novel model extraction attack that targets these QNNs.
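To make the threat model concrete, here is a minimal sketch of a black-box extraction loop under simplified assumptions: `query_victim` stands in for the QNNaaS endpoint and returns output scores, and `SubstituteQNN` is an illustrative local stand-in model. Neither name, nor any hyperparameter below, comes from the paper.

```python
# Hypothetical extraction loop: query a black-box service for labels,
# then fit a local substitute on the stolen (input, label) pairs.
import torch
import torch.nn as nn


class SubstituteQNN(nn.Module):
    """Illustrative stand-in for the attacker's local substitute model."""

    def __init__(self, n_features: int, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 16),
            nn.Tanh(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def extract(query_victim, inputs: torch.Tensor, epochs: int = 50, lr: float = 1e-2):
    """Label attacker-chosen inputs via the victim, then train a substitute."""
    with torch.no_grad():
        labels = query_victim(inputs).argmax(dim=1)  # stolen hard labels
    substitute = SubstituteQNN(n_features=inputs.shape[1])
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(substitute(inputs), labels).backward()
        opt.step()
    return substitute
```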

The authors highlight that existing methods for training substitute QNNs often fall short in real-world settings, particularly in Noisy Intermediate-Scale Quantum (NISQ) environments, where noise and cost constraints can significantly degrade performance. Their proposed method, termed "split co-teaching," exploits variations in the labels returned by the victim model: the queried data are split according to their noise sensitivity, and co-teaching strategies are then applied to train the substitute more effectively, as sketched below.
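A rough sketch of those two ingredients follows, under stated assumptions rather than the paper's actual procedure: noise sensitivity is approximated here by re-querying each input and checking whether the returned label flips, and the co-teaching step uses the standard small-loss selection scheme, in which two substitute networks each pick the cleanest-looking samples for their peer. The function names and the `keep_ratio` parameter are hypothetical.

```python
# Sketch of split-by-noise-sensitivity plus a co-teaching update.
# The agreement heuristic and keep_ratio are illustrative assumptions.
import torch
import torch.nn.functional as F


def split_by_noise_sensitivity(query_victim, inputs, n_repeats=5):
    """Re-query each input several times; samples whose labels never flip
    are treated as noise-robust, the rest as noise-sensitive."""
    votes = torch.stack(
        [query_victim(inputs).argmax(dim=1) for _ in range(n_repeats)]
    )                                         # shape: (n_repeats, N)
    stable = (votes == votes[0]).all(dim=0)   # True where the label is constant
    labels = votes.mode(dim=0).values         # majority label per sample
    robust = (inputs[stable], labels[stable])
    sensitive = (inputs[~stable], labels[~stable])
    return robust, sensitive


def co_teaching_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio=0.8):
    """One co-teaching update: each network ranks the batch by its own loss
    and hands its smallest-loss (cleanest-looking) samples to its peer."""
    k = max(1, int(keep_ratio * len(x)))
    loss_a = F.cross_entropy(net_a(x), y, reduction="none")
    loss_b = F.cross_entropy(net_b(x), y, reduction="none")
    idx_for_b = loss_a.topk(k, largest=False).indices  # a selects for b
    idx_for_a = loss_b.topk(k, largest=False).indices  # b selects for a
    opt_a.zero_grad()
    F.cross_entropy(net_a(x[idx_for_a]), y[idx_for_a]).backward()
    opt_a.step()
    opt_b.zero_grad()
    F.cross_entropy(net_b(x[idx_for_b]), y[idx_for_b]).backward()
    opt_b.step()
```

Keeping only small-loss samples is the standard co-teaching mechanism for tolerating noisy labels, which is plausibly why it suits labels produced by repeated NISQ queries; a lower `keep_ratio` discards more of each batch as suspect.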

Experimental results indicate that split co-teaching outperforms classical extraction methods by 6.5% to 9.5% and existing QNN extraction techniques by 0.1% to 3.7% across various tasks. This result underscores the need for stronger security measures in quantum computing, as the ability to extract models poses significant risks to proprietary algorithms and intellectual property.

These findings matter for developers and users of quantum technologies, as they expose vulnerabilities that could be exploited as quantum computing matures. The full paper can be accessed at arXiv:2409.02207.