Cite this article:
Wang Hui, Yang Chenxi, Han Jizhong, Dai Jiao, Guo Tao, Zhou Xi, Zhang Hongyue. A Method for Alleviating Information Cocoon in Recommendation Algorithms Guided by Large Language Models[J]. Journal of Cyber Security, Accepted.
DOI:
Received: 2025-01-23    Revised: 2025-06-20
Fund project:

A Method for Alleviating Information Cocoon in Recommendation Algorithms Guided by Large Language Models
Wang Hui, Yang Chenxi, Han Jizhong, Dai Jiao, Guo Tao, Zhou Xi, Zhang Hongyue

(Institute of Information Engineering)
Abstract:
The issue of information cocoons is a significant challenge in the field of algorithmic safety. In today's digital age, when recommendation algorithms focus excessively on users' existing interests and preferences, users may be confined to a monolithic information environment, unable to access diverse viewpoints and values. This phenomenon not only leads to the polarization of opinions but also poses broader social risks such as threats to public opinion security. To address this, existing research attempts to enhance the diversity of recommender systems by introducing Large Language Models (LLMs) to alleviate data sparsity and uneven distribution. However, directly using LLMs to generate data to fill datasets has obvious limitations: LLMs are prone to hallucination and struggle to accurately capture collaborative information in recommendation scenarios. In response to this challenge, this paper proposes a new method for alleviating information cocoons in recommendation algorithms guided by large language models. Instead of directly using large models to generate recommendation data, the method improves the diversity of recommendations under the guidance of LLMs. The method consists of two modules: a semantic enhancement module and an out-of-domain item generation module. First, the semantic enhancement module uses the reasoning and summarization capabilities of LLMs to extract semantic representations of user interests and item audiences from user and item profiles, producing semantically rich profile files. Then, the out-of-domain item generation module treats these semantic vectors as pseudo-labels that guide Conditional Generative Adversarial Networks (CGANs) to generate unexpected items conforming to the real distribution, effectively expanding the diversity of recommendations.
Experiments on three real-world datasets, Amazon Beauty, Amazon Digital Music, and Amazon Books, demonstrate that the method improves recommendation diversity while maintaining accuracy, effectively mitigating information cocoons. A series of extension experiments in cross-domain scenarios further validates the generalizability and applicability of the proposed method.
Key words: information cocoon; Large Language Models; diversity; recommender system
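The conditioning scheme described in the abstract — an LLM-derived semantic vector used as a pseudo-label that steers item generation — can be sketched minimally as follows. This is not the authors' implementation: the dimensions, the untrained linear generator, and the function names are all illustrative assumptions; a real CGAN would train the generator adversarially against a discriminator.

```python
import math
import random

random.seed(0)

# Illustrative sizes (assumptions, not from the paper).
SEM_DIM, NOISE_DIM, ITEM_DIM = 8, 4, 16

# Toy generator weights; a real conditional GAN learns these adversarially.
W = [[random.gauss(0, 1) for _ in range((ITEM_DIM))]
     for _ in range(SEM_DIM + NOISE_DIM)]

def generate_item(semantic_vec):
    """Concatenate the LLM-derived pseudo-label with random noise and map
    the result through the generator -- the standard conditioning scheme
    of a conditional GAN (Mirza & Osindero style)."""
    z = [random.gauss(0, 1) for _ in range(NOISE_DIM)]
    cond = list(semantic_vec) + z  # condition the generator on the pseudo-label
    return [math.tanh(sum(c * W[i][j] for i, c in enumerate(cond)))
            for j in range(ITEM_DIM)]

# Stand-in for a semantic vector extracted by the semantic enhancement module.
pseudo_label = [random.gauss(0, 1) for _ in range(SEM_DIM)]
fake_item = generate_item(pseudo_label)
print(len(fake_item))  # 16
```

Because the noise varies while the pseudo-label is fixed, repeated calls yield different item embeddings that all reflect the same semantic condition — the mechanism by which the method broadens recommendations beyond a user's observed history.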