In a new Mozilla study, thousands of crowdsourced volunteers reported that YouTube’s recommendation algorithm referred them to videos they regretted watching. Seventy-one percent of such views were of videos recommended by YouTube’s artificial intelligence, which is designed to keep users glued to the app. YouTube’s A.I. also recommended videos that the site later removed, the report said: about 9% of the recommended “regrettable” uploads, 189 videos in all, were eventually taken down. “When it’s actively suggesting that people watch content that violates YouTube’s policies, the algorithm seems to be working at odds with the platform’s stated aims, their own community guidelines, and the goal of making the platform a safe place for people,” said Mozilla’s senior manager of advocacy, Brandi Geurkink.
Mozilla’s study found that the videos volunteers flagged as regrettable—falling into categories such as “misinformation, violent or graphic content, hate speech, and spam/scams”—appeared to surge in popularity, receiving 70% more views per day than the non-reported content the respondents clicked on. In the past, YouTube’s biggest controversies have been algorithm-related, particularly in cases where users followed the algorithm down a far-right rabbit hole. Some users reportedly began their descent by watching seemingly inoffensive videos but were progressively pushed toward radical and conspiratorial content.
In response to Mozilla, YouTube asserted that users are pleased with the site’s algorithmic suggestions. “We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content,” the company told NBC News in a statement. “Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1 percent.” And the algorithm has certainly spelled success for the site: 70% of viewing time on YouTube is spent on videos its recommendations surfaced.
To combat these algorithmic trends, Mozilla recommended that individual users review their personal YouTube history and delete content or searches they feel could negatively influence the platform’s A.I. Mozilla went on to call on YouTube to show more transparency by instituting an independent audit of its algorithm, and suggested that YouTube provide policymakers and researchers with the information necessary to understand and question its recommendation system. In a comment to Insider, YouTube said it is considering giving independent researchers access to the inner workings of its algorithmic system, though it did not provide any details or dates regarding such a step.