Twitter's user-driven fact-checking system shows unexpected results in new study

Research reveals problems with Twitter's community-based fact-checking system, launched about two years ago: most user-submitted corrections to political posts never reach the platform's wider audience.

October 30, 2024, 05:19 PM


Elon Musk overhauled Twitter's fact-checking approach roughly two years ago after dismissing much of the company's content-moderation staff, switching to a user-driven model in what seemed like a bold move at the time.

The new system, called Community Notes, lets ordinary users flag false information; it is meant to be a crowd-powered way of keeping facts straight. Musk has championed the setup and made a sweeping claim about it:

"Community Notes is the best source of truth on the internet"

Elon Musk's statement

But new findings from research groups paint a less flattering picture: most fact-checks on political posts never reach the people who see those posts. The Center for Countering Digital Hate analyzed the system in depth, and The Washington Post conducted its own review; both found the same problem.

As it currently operates, Community Notes does little to stop false information from spreading: the research shows that many accurate corrections are written but never shown to readers. As a result, posts containing false claims keep circulating uncorrected, even when users have tried to correct them.