A growing number of cases indicates that large language models (LLMs) bring transformative advancements while also raising privacy concerns. Despite promising recent surveys in the literature, there is still no comprehensive analysis dedicated specifically to text privacy in LLMs.
To remedy this gap, a research team led by Jie WU published their new research on 15 October 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team conducted an in-depth investigation into privacy issues within LLMs, providing a detailed analysis of five privacy issues and their solutions in LLM training and invocation. Additionally, the team examined three privacy-centric research focuses in LLM applications that earlier surveys had not covered. Based on this investigation, the research discussed further research directions and offered insights into LLM-native security mechanisms, concluding that LLM privacy research is still in the technical exploration phase and that a gap remains before practical application.
Future work will focus on ongoing monitoring of new research and continuous refinement of the survey. The team hopes this paper provides researchers and practitioners with a comprehensive understanding to better address the privacy challenges that LLMs may encounter in real-world applications.
DOI: 10.1007/s11704-024-40583-8