First Evidence That Social Bots Play a Major Role in Spreading Fake News
by Emerging Technology from the arXiv, August 2017

Automated accounts are being programmed to spread fake news, according to the first systematic study of the way online misinformation spreads.

Fake news and the way it spreads on social media is emerging as one of the great threats to modern society. In recent times, fake news has been used to manipulate stock markets, make people choose dangerous health-care options, and manipulate elections, including last year's presidential election in the U.S.

Clearly, there is an urgent need for a way to limit the diffusion of fake news. And that raises an important question: how does fake news spread in the first place?

Today we get an answer of sorts thanks to the work of Chengcheng Shao and pals at Indiana University in Bloomington. For the first time, these guys have systematically studied how fake news spreads on Twitter and provide a unique window into this murky world. Their work suggests clear strategies for controlling this epidemic.

At issue is the publication of news that is false or misleading. So widespread has this become that a number of independent fact-checking organizations have emerged to establish the veracity of online information. These include snopes.com, politifact.com, and factcheck.org.

These sites list 122 websites that routinely publish fake news. These fake news sites include infowars.com, breitbart.com, politicususa.com, and theonion.com. "We did not exclude satire because many fake-news sources label their content as satirical, making the distinction problematic," say Shao and co.

[…]

Shao and co say bots play a particularly significant role in the spread of fake news soon after it is published. What's more, these bots are programmed to direct their tweets at influential users. "Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users," say Shao and co.

That's a clever strategy. Information is much more likely to go viral when it passes through highly connected nodes in a social network, so targeting these influential users is key. Humans can easily be fooled by automated accounts and can unwittingly seed the spread of fake news (some humans do this wittingly, of course).

"These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation," say Shao and co.

That's an interesting conclusion, but just how it can be done isn't clear.

[Full article]

Source: MIT Technology Review -- source link
#collective #behavior #analysis #visualization #information #particles #sociology