YouTube’s algorithm, designed to keep users watching, is under scrutiny for promoting misinformation and conspiracy theories. The algorithm’s primary objective is to maximise user engagement, and in pursuit of that goal it often favours sensationalist content that fuels outrage and fear. This can lead to the spread of false information, as seen in the promotion of videos claiming the Earth is flat or denying climate change.
Critics argue that YouTube’s algorithm is not neutral and that its design can inadvertently promote harmful content. The algorithm’s preference for content that keeps users engaged for longer, regardless of veracity, has contributed to a rise in conspiracy theories and extreme views.
The algorithm also tailors recommendations based on viewing history, creating a ‘filter bubble’ that reinforces existing beliefs. This can polarise viewers, pushing them towards more extreme content and viewpoints.
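To make this feedback loop concrete, here is a minimal, purely illustrative sketch (the topic names, watch-time figures, and scoring formula are invented for the example and are not drawn from the article or from YouTube’s actual system). Recommendations are weighted by predicted engagement and by the viewer’s history, so each view makes similar, higher-engagement content more likely to surface again:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical topics and average watch-time scores; sensationalist
# content is assumed to hold attention longer.
TOPICS = ["news", "science", "conspiracy", "outrage"]
AVG_WATCH = {"news": 0.5, "science": 0.5, "conspiracy": 0.8, "outrage": 0.9}

def recommend(history: Counter) -> str:
    """Pick a topic weighted by engagement and by the viewer's history."""
    total = sum(history.values()) or 1
    # Past views boost a topic's weight, so the loop reinforces itself.
    weights = [AVG_WATCH[t] * (1 + 3 * history[t] / total) for t in TOPICS]
    return random.choices(TOPICS, weights=weights)[0]

history = Counter()
for _ in range(200):
    history[recommend(history)] += 1  # the viewer watches what is recommended

print(history.most_common())  # high-engagement topics dominate over time
```

Even in this toy model, the highest-engagement topics tend to crowd out the rest, which is the ‘filter bubble’ dynamic the article describes.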
YouTube has acknowledged these issues and is working on solutions. Yet the company faces a difficult balancing act: user engagement drives advertising revenue, while the platform also bears responsibility for preventing the spread of misinformation.
This situation highlights the broader issue of how technology platforms’ design choices can influence public discourse and understanding. It raises the question of whether these platforms have a social responsibility to ensure the accuracy of the content they promote.
Go to source article: https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth