YouTube Study – The YouTube algorithm doesn’t radicalize you, people radicalize themselves

A study shows that YouTube’s algorithm does not need to play a part for people to choose radical and conspiratorial videos

If you search for videos about climate change, the Holocaust or vaccines on YouTube, the algorithm immediately starts recommending content from deniers, or so the assumption has been until now.

About a third of Spaniards and a quarter of Americans get their information from YouTube, one of the largest online media platforms in the world with billions of users and hours of content. In recent years, a popular narrative has emerged in the media suggesting that videos from highly partisan and conspiracy theory-promoting YouTube channels are radicalizing young people and that YouTube’s recommendation algorithm is leading users down a path of increasingly radical content.

However, a new study from the Computational Social Sciences Laboratory (CSSLab) at the University of Pennsylvania concludes otherwise: it is primarily the user’s own interests and political preferences that determine what they watch. If the recommendation system has any influence on users’ media diet at all, it is a moderating one.

Homa Hosseinmardi, associate research scientist at CSSLab and lead author of the study, says that “on average, using the recommendation system exclusively leads to less partisan consumption.”

This is how YouTube recommendations work

YouTube’s recommendation algorithm uses a combination of factors to decide which videos to suggest to each user, adapting to their interests and behavior on the platform. It analyzes viewing and search history, the way the user interacts with videos (likes, comments, subscriptions), the time spent watching each piece of content and how often they engage with certain types of videos. It also weighs content relevance through titles, descriptions and metadata, and pays attention to recent and popular videos.
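
The exact ranking model is proprietary, but the factors described above can be pictured with a small scoring sketch. Everything in the Python example below (the signal names, the weights and the way they are combined) is a hypothetical illustration of that list of factors, not YouTube’s actual system.

```python
# Hypothetical sketch: combining the signals described above into one ranking
# score. Signal names and weights are illustrative assumptions, not YouTube's.
from dataclasses import dataclass

@dataclass
class CandidateVideo:
    topic_match: float          # similarity to the user's viewing and search history (0..1)
    engagement: float           # likes, comments, subscriptions the video tends to drive (0..1)
    expected_watch_time: float  # predicted fraction of the video the user will watch (0..1)
    relevance: float            # title/description/metadata match to the current context (0..1)
    freshness: float            # how recently the video was uploaded (0..1)
    popularity: float           # platform-wide popularity of the video (0..1)

# Fixed weights stand in for what would be a learned model in a real system.
WEIGHTS = {
    "topic_match": 0.30,
    "expected_watch_time": 0.25,
    "engagement": 0.20,
    "relevance": 0.10,
    "popularity": 0.10,
    "freshness": 0.05,
}

def score(video: CandidateVideo) -> float:
    """Weighted sum of the personalization and content signals."""
    return sum(weight * getattr(video, name) for name, weight in WEIGHTS.items())

def recommend(candidates: list[CandidateVideo], k: int = 10) -> list[CandidateVideo]:
    """Return the k highest-scoring candidates, standing in for the ranked sidebar."""
    return sorted(candidates, key=score, reverse=True)[:k]
```

In a production recommender these signals would come from learned models that are updated continuously; the sketch only mirrors the factors named in the text.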

Although much of this process is automated, YouTube also manually adjusts its recommendations based on user feedback to improve the quality of suggestions and avoid problematic content. The ultimate goal is to provide a personalized experience that entertains users and extends the time they spend on the platform, although the specifics of the algorithm are complex and constantly evolving.

Various reports and analyses suggest that the algorithm may favor content that generates high engagement, such as long watch times and frequent user interaction, which in some cases can include videos with controversial or radical content. However, this does not necessarily mean that the algorithm has an inherent preference for this type of content.

Analyzing YouTube suggestions with bots

To determine the true impact of YouTube’s recommendation algorithm on what users see, researchers created bots that followed the engine’s recommendations or ignored them entirely. They used the YouTube view history of 87,988 real users collected from October 2021 to December 2022.

Hosseinmardi and co-authors wanted to unravel the complex relationship between user preferences and the recommendation algorithm, a relationship that evolves with each video viewed. They assigned these bots individual YouTube accounts so they could track their watch history, and estimated the partisanship of the content each bot watched from the metadata associated with each video.
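
The study scores each video from its metadata; as a rough illustration, a bot’s watch history can then be summarized as an average partisanship value. The sketch below is a hypothetical stand-in, assuming per-video scores on a [-1, 1] scale have already been computed; it is not the paper’s actual pipeline.

```python
# Hypothetical sketch: summarizing a bot's watch history into a single average
# partisanship value. The per-video scores are assumed to be precomputed from
# video/channel metadata on a [-1, 1] scale (illustrative convention only).
from statistics import mean

def history_partisanship(watch_history: list[dict]) -> float:
    """Mean partisanship of all videos a bot has watched so far."""
    return mean(video["partisanship"] for video in watch_history)

history = [
    {"id": "a1", "partisanship": 0.8},   # strongly partisan video
    {"id": "b2", "partisanship": 0.1},   # mostly neutral video
    {"id": "c3", "partisanship": -0.2},  # mildly opposite-leaning video
]
print(round(history_partisanship(history), 2))  # 0.23
```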

In two experiments, the bots went through a “learning phase” in which they watched the same sequence of videos to ensure they were all presenting the same preferences to YouTube’s algorithm. They then split into groups: some continued to follow the viewing history of the actual user they were based on; others served as experimental “counterfactual bots” that followed specific rules designed to separate user behavior from algorithmic influence.

In the first experiment, the control bot continued to watch videos from the user’s history after the learning phase, while the counterfactual bots deviated from the users’ actual behavior and selected videos only from the recommended list, without taking user preferences into account.
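
Conceptually, the two groups amount to two viewing policies run on accounts that share the same warm-up. The sketch below is a hypothetical reconstruction: the get_recommendations stub and the function names are assumptions, and only the overall design (a shared learning phase, then user-driven versus recommendation-driven selection) follows the study’s description.

```python
import random

# Hypothetical sketch of the two bot policies. `get_recommendations(history)`
# stands in for whatever the platform would show an account with this watch
# history; it is not a real API call.

def control_bot(user_history: list[str], learning_phase: int) -> list[str]:
    """Control policy: after the shared learning phase, keep replaying the
    real user's own viewing sequence."""
    watched = list(user_history[:learning_phase])
    watched.extend(user_history[learning_phase:])
    return watched

def counterfactual_bot(user_history: list[str], learning_phase: int,
                       n_steps: int, get_recommendations) -> list[str]:
    """Counterfactual policy: after the same learning phase, ignore the user's
    preferences and pick each next video only from the recommended list
    (here, uniformly at random among the recommendations)."""
    watched = list(user_history[:learning_phase])
    for _ in range(n_steps):
        recommendations = get_recommendations(watched)
        watched.append(random.choice(recommendations))
    return watched
```

Comparing the partisanship of the two resulting histories is what separates the algorithm’s contribution from the user’s own choices.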

Human users, more radical than bots

The researchers found that counterfactual bots, on average, consumed less partisan content than the corresponding real user, an effect that was stronger for the heaviest consumers of partisan content.

In the second experiment, the researchers estimated the YouTube recommender’s “forgetting time,” i.e. how long it takes for the algorithm to stop recommending problematic content to previously interested users after they have lost interest.

The results showed that, on average, sidebar recommendations shifted toward moderate content after about 30 videos, while homepage recommendations tended to adjust less quickly.
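
One way to read this “forgetting time” is as the number of moderate videos a bot has to watch before the average partisanship of its recommendations falls back below a chosen threshold. The sketch below illustrates that measurement under stated assumptions; the callables and the 0.1 threshold are hypothetical, not the study’s actual instrumentation.

```python
# Hypothetical sketch: estimating how many moderate videos it takes before the
# recommendations moderate. All callables are stand-ins for the study's
# instrumentation; the 0.1 threshold is an arbitrary illustrative choice.
def forgetting_time(bot_history: list[str], watch_moderate_video,
                    get_recommendations, recommendation_partisanship,
                    threshold: float = 0.1, max_videos: int = 100) -> int:
    """Count moderate videos watched until the mean partisanship of the
    recommendations drops below `threshold` (e.g. roughly 30 for the sidebar)."""
    for n in range(1, max_videos + 1):
        bot_history.append(watch_moderate_video())           # watch one moderate video
        recommendations = get_recommendations(bot_history)   # refresh recommendations
        if recommendation_partisanship(recommendations) < threshold:
            return n                                          # recommendations have moderated
    return max_videos  # did not moderate within the budget
```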

Hosseinmardi concludes that while YouTube’s recommendation algorithm has been accused of leading its users toward conspiracies, we should not overlook the fact that users have significant agency over what they watch and might have consumed the same content, or even worse, had there been no recommendations at all.

Looking forward, the researchers hope that others will adopt their method to study AI-mediated platforms where user preferences and algorithms interact to better understand the role that algorithmic content recommendation engines play in our daily lives.

The results are published in the journal Proceedings of the National Academy of Sciences.

REFERENCE

Hosseinmardi, H., et al. Causal estimation of the impact of YouTube’s recommendation system using counterfactual bots. Proceedings of the National Academy of Sciences.
