Abstract
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) but risk inadvertently exposing copyrighted or proprietary data, especially when such data is used for training but not intended for distribution. Traditional methods address these leaks only after content is generated, by which point sensitive information may already be exposed. This study introduces a proactive approach: examining an LLM's internal states before text generation to detect potential leaks. Using a curated dataset of copyrighted materials, we trained a neural network classifier to identify risks, enabling early intervention by halting the generation process or altering outputs to prevent disclosure. Integrated with a Retrieval-Augmented Generation (RAG) system, this framework ensures adherence to copyright and licensing requirements while enhancing data privacy and ethical standards. Our results show that analyzing internal states effectively mitigates the risk of copyrighted data leakage, offering a scalable solution that fits smoothly into AI workflows and ensures compliance with copyright regulations while maintaining high-quality text generation. The implementation is publicly available.
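The core idea described above — a classifier over the model's internal states that gates generation before any text is produced — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the hidden-state dimension, the logistic classifier, its weights, and the `guarded_generate` / `leakage_risk` names are all hypothetical placeholders (a real system would read hidden states from the LLM and use the trained classifier).

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN_DIM = 16  # toy stand-in for an LLM's hidden-state dimensionality

# Hypothetical leakage classifier: a logistic regression over the model's
# internal state at the last prompt token. Random placeholder weights,
# NOT the trained classifier from the paper.
W = rng.normal(size=HIDDEN_DIM)
b = 0.0


def leakage_risk(hidden_state: np.ndarray) -> float:
    """Map an internal-state vector to a leakage probability in [0, 1]."""
    return float(1.0 / (1.0 + np.exp(-(hidden_state @ W + b))))


def guarded_generate(hidden_state: np.ndarray, threshold: float = 0.5) -> str:
    """Intervene before decoding: block or proceed based on predicted risk."""
    if leakage_risk(hidden_state) >= threshold:
        # Early intervention: stop before any copyrighted text is emitted.
        return "[generation blocked: potential copyrighted-content leak]"
    return "<continue normal decoding>"


# Simulated internal state standing in for a real LLM hidden state.
state = rng.normal(size=HIDDEN_DIM)
print(guarded_generate(state))
```

In a RAG setting, the same check could run after retrieval but before decoding, so retrieved copyrighted passages never reach the output stream.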
| Original language | English |
|---|---|
| Title of host publication | Findings of the Association for Computational Linguistics |
| Subtitle of host publication | EMNLP 2025 |
| Publisher | Association for Computational Linguistics |
| Pages | 10786-10807 |
| Number of pages | 22 |
| ISBN (Print) | 9798891763357 |
| DOIs | |
| Publication status | Published - Nov 2025 |
| Externally published | Yes |
| Event | 30th Conference on Empirical Methods in Natural Language Processing (EMNLP 2025), Suzhou, China. Duration: 4 Nov 2025 → 9 Nov 2025. https://aclanthology.org/volumes/2025.emnlp-main/ |
Publication series
| Name | EMNLP - Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP |
|---|---|
Conference
| Conference | 30th Conference on Empirical Methods in Natural Language Processing (EMNLP 2025) |
|---|---|
| Abbreviated title | 30th EMNLP |
| Place | China |
| City | Suzhou |
| Period | 4/11/25 → 9/11/25 |
| Internet address | |
Publisher's Copyright Statement
- This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/
ISACL: Internal State Analyzer for Copyrighted Training Data Leakage