Title: Stealing User Prompts from Mixture of Experts
Authors: Itay Yona, Ilia Shumailov, Jamie Hayes, Nicholas Carlini
Published: 30 October 2024
Link: http://arxiv.org/abs/2410.22884v1

Abstract

Mixture-of-Experts (MoE) models improve the efficiency and scalability of dense language models by routing each token to a small number of experts in each layer. In this paper, we show how an adversary who can arrange for their queries to appear in the same batch of examples as a victim's queries can exploit Expert-Choice Routing to fully disclose a victim's prompt. We successfully demonstrate the effectiveness of this attack on a two-layer Mixtral model, exploiting the tie-handling behavior of the torch.topk CUDA implementation. Our results show that we can extract the entire prompt using O(VM²) queries (with vocabulary size V and prompt length M), or 100 queries on average per token in the setting we consider. This is the first attack to exploit architectural flaws for the purpose of extracting user prompts, introducing a new class of LLM vulnerabilities.
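The side channel described above hinges on Expert-Choice Routing being batch-dependent: each expert selects its top-k tokens across the whole batch, so on score ties the token's position in the batch decides who gets dropped. A minimal sketch (not the paper's implementation; it uses NumPy with a stable sort as a stand-in for the deterministic tie-handling of torch.topk, and the score values are invented for illustration):

```python
import numpy as np

def expert_choice_route(scores, capacity):
    """Expert-Choice Routing sketch: each expert (one column of `scores`)
    picks its top-`capacity` tokens (rows) by router score across the
    whole batch. Ties are broken by token order via a stable sort,
    standing in for deterministic top-k tie-handling (e.g. torch.topk)."""
    num_tokens, num_experts = scores.shape
    assignments = []
    for e in range(num_experts):
        # Stable descending sort: on equal scores, the earlier token wins.
        order = np.argsort(-scores[:, e], kind="stable")
        assignments.append(sorted(order[:capacity].tolist()))
    return assignments

# Hypothetical batch: token 0 is the victim's; tokens 1-2 are the attacker's.
# All three tokens score identically for expert 0 (capacity 2), so which
# token expert 0 drops depends only on batch position -- the information
# leak the attack observes.
scores = np.array([
    [0.9, 0.1],   # victim token
    [0.9, 0.2],   # attacker token A
    [0.9, 0.3],   # attacker token B
])
print(expert_choice_route(scores, capacity=2))
# Expert 0 keeps tokens [0, 1] and drops the attacker's token 2; the
# attacker can detect that drop from changes in their own model output.
```

Because whether the attacker's token collides (ties) with the victim's token at an expert depends on what the victim's token is, observing which of the attacker's tokens get dropped lets the attacker test guesses about the victim's prompt, one token at a time.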