DOI:10.19363/J.cnki.cn10-1380/tn.2021.09.12
Received: 2021-04-30; Revised: 2021-08-08
Funding: This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Category C) (No. XDC02060400).
Trust Evaluation of News Recommendation Algorithms
LIU Zongzhen,ZHANG Xiaodan,GUO Tao,GE Jingguo,ZHOU Xi,WANG Yuhang,CHEN Jiadi,LV Honglei,LIN Junyu
School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China
Abstract:
With the rapid development of new technologies such as AI, 5G, and AR/VR, content-oriented applications such as e-commerce, social networks, and short videos have emerged one after another, making the problem of information overload increasingly serious. The development of artificial intelligence has driven the explosive adoption of intelligent algorithms. As one kind of intelligent algorithm, recommendation algorithms, propelled by big data, rich application scenarios, and growing computing power, use information-filtering techniques to provide users with personalized, high-quality recommendations adapted to their interests and behaviors. They have gradually improved the user experience and the efficiency of content distribution, and have alleviated information overload to a certain extent. However, the potential biases, black-box characteristics, and content-distribution methods of recommendation algorithms have also brought security challenges, such as unfair and inexplicable decision results, information cocoons, and infringement of user privacy. How to improve the interpretability, fairness, and trustworthiness of recommendation algorithms has therefore attracted growing attention from government regulators, industry, and academia at home and abroad, and recommendation systems and algorithms have accordingly moved from a development period into a regulatory period. To this end, focusing on the news domain, this study analyzes the key elements of the recommendation pipeline, such as manuscript profiling, user profiling, recommendation and delivery, feedback and intervention, and manual review. Centering on the participants in the recommendation-algorithm ecosystem, namely content producers, audiences, algorithm models, and news platforms, it proposes a trust evaluation system for news recommendation algorithms based on fairness, interpretability, and non-repudiation.
We then carry out qualitative and quantitative analyses. Fairness, interpretability, and non-repudiation are positively correlated with trust: the stronger the fairness and non-repudiation, and the higher the interpretability, the more trustworthy the news recommendation algorithm. This work is expected to fill the research gap on the trustworthiness of news recommendation algorithms, establish a trustworthy recommendation-algorithm ecosystem, accelerate the construction and adoption of secure recommendation systems, provide a reference for research on the trustworthiness of intelligent algorithms, and offer ideas for their supervision and governance.
Key words:  news  recommendation algorithm  trust evaluation system  fairness  interpretability  non-repudiation
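The abstract states that trust rises monotonically with fairness, non-repudiation, and interpretability. A minimal sketch of such an aggregation is shown below; the `trust_score` function, the equal weights, and the [0, 1] scale for each dimension are illustrative assumptions, not the paper's actual formulation.

```python
def trust_score(fairness: float, interpretability: float,
                non_repudiation: float,
                weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Combine per-dimension scores (each in [0, 1]) into one trust value.

    A weighted sum is the simplest aggregation that preserves the
    monotonicity claimed in the abstract: raising any one dimension
    while holding the others fixed raises the overall trust score.
    """
    scores = (fairness, interpretability, non_repudiation)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("each dimension score must lie in [0, 1]")
    return sum(w * s for w, s in zip(weights, scores))
```

With equal weights the score is simply the mean of the three dimensions; a real evaluation system would calibrate the weights (or use a non-linear combination) against the qualitative and quantitative criteria the paper defines.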