arXiv Analytics

Sign in

arXiv:2407.20529 [cs.LG]AbstractReferencesReviewsResources

Can LLMs be Fooled? Investigating Vulnerabilities in LLMs

Sara Abdali, Jia He, CJ Barberan, Richard Anarfi

Published 2024-07-30Version 1

The advent of Large Language Models (LLMs) has garnered significant popularity and wielded immense power across various domains within Natural Language Processing (NLP). While their capabilities are undeniably impressive, it is crucial to identify and scrutinize their vulnerabilities especially when those vulnerabilities can have costly consequences. One such LLM, trained to provide a concise summarization from medical documents could unequivocally leak personal patient data when prompted surreptitiously. This is just one of many unfortunate examples that have been unveiled and further research is necessary to comprehend the underlying reasons behind such vulnerabilities. In this study, we delve into multiple sections of vulnerabilities which are model-based, training-time, inference-time vulnerabilities, and discuss mitigation strategies including "Model Editing" which aims at modifying LLMs behavior, and "Chroma Teaming" which incorporates synergy of multiple teaming strategies to enhance LLMs' resilience. This paper will synthesize the findings from each vulnerability section and propose new directions of research and development. By understanding the focal points of current vulnerabilities, we can better anticipate and mitigate future risks, paving the road for more robust and secure LLMs.

Comments: 14 pages, 1 figure. arXiv admin note: text overlap with arXiv:2403.12503
Categories: cs.LG, cs.CR
Related articles: Most relevant | Search more
arXiv:2301.10226 [cs.LG] (Published 2023-01-24)
A Watermark for Large Language Models
arXiv:2306.04634 [cs.LG] (Published 2023-06-07)
On the Reliability of Watermarks for Large Language Models
arXiv:2309.00254 [cs.LG] (Published 2023-09-01)
Why do universal adversarial attacks work on large language models?: Geometry might be the answer