Facebook is developing a new artificial intelligence system to better understand and manage the vast amount of content on its platform. The company’s Responsible AI team is working on the project, which aims to tackle issues such as misinformation and hate speech. The system will use a method called reinforcement learning from human feedback (RLHF) to improve its understanding of context and nuance in posts.
The new AI system will be trained on data collected from human reviewers, who rank different versions of the same piece of content in order of acceptability. The AI then uses these rankings to learn how to moderate similar content in the future.
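To make the ranking-based feedback step more concrete, here is a minimal sketch of how pairwise human rankings can train an "acceptability" scoring model, the first stage of an RLHF-style pipeline. Nothing below comes from Facebook's actual system: the model name, the embedding inputs, and the Bradley-Terry-style loss are illustrative assumptions only.

```python
# Hypothetical sketch: training an acceptability (reward) model from
# pairwise human rankings. Names, architecture, and data are placeholders,
# not Facebook's implementation.

import torch
import torch.nn as nn


class AcceptabilityModel(nn.Module):
    """Scores a piece of content; a higher score means more acceptable (illustrative)."""

    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embedding_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, content_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(content_embedding).squeeze(-1)


def pairwise_ranking_loss(score_preferred: torch.Tensor,
                          score_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style objective: the version reviewers ranked higher
    # should receive a higher score than the version ranked below it.
    return -torch.log(torch.sigmoid(score_preferred - score_rejected)).mean()


# Toy training loop on random embeddings standing in for pairs of content
# versions that human reviewers have ranked against each other.
model = AcceptabilityModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    preferred = torch.randn(32, 128)  # embeddings of the versions ranked higher
    rejected = torch.randn(32, 128)   # embeddings of the versions ranked lower
    loss = pairwise_ranking_loss(model(preferred), model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a full RLHF setup, a scoring model like this would then guide further training of the moderation system, rewarding decisions that align with the human rankings.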
However, the system is not perfect. It may struggle to grasp cultural nuance and could be gamed by users. To counter these issues, Facebook plans to improve the AI’s transparency and allow users to customise how much AI intervention they want on their feed.
Facebook’s new AI system is part of a broader push by the company to address criticism about the spread of misinformation and harmful content on its platform. It’s a significant development in the ongoing battle against online misinformation and hate speech.
Go to source article: https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/