He said these people were not necessarily able to tell whether the news they encountered came from a producer working according to the ethics of journalism.
Docquir said there was a role for social media platforms to contribute to media plurality and diversity.
The European Charter of Fundamental Rights specifically mentions pluralism, he pointed out, and a diversity of viewpoints must be sufficiently visible on social media in order to have an informed citizenry.
Any supranational regulatory frameworks must be based on EU fundamental rights, Docquir said, because otherwise they could be used to catastrophic effect in less democratic regimes.
Tech platforms would not want to implement different regulatory regimes for each territory in which they operate, he told Tuesday's event (2 March).
More facts
The problem of disinformation can’t be solved by ‘more facts’, given the existing information overload, technologist Mark Little warned those attending the webinar.
“Disinformation has an emotional power that punches through all that noise,” he said.
Trustworthy news on its own is not a sufficient response, since human fact-checking cannot scale to meet the challenge of disinformation.
Neither will automated responses work for disinformation, which is a complex, nuanced, linguistic, cultural phenomenon that is mutating on a moment-by-moment basis to evade detection, Mark Little said.
Automated responses to disinformation will have bias built into their datasets, and that is extremely dangerous, he said.
He added that, while some big-tech platforms took disinformation seriously, some simply saw it as a threat to their business models.
Little drew an analogy with food labelling: there should be a clear indication of what is in the information we consume every day.
Because of linguistic variations, power must also be devolved to the individual languages and markets, he suggested.
Nazi undertones
Mark Little argued that the term 'fake news' has Nazi undertones in Germany, for instance.
Bot armies
“What kind of networks are creating ‘bot armies’? What kind of networks are manipulating paid advertising?” Little asked.
Data exposure should show how pieces of content are being recommended, and what level of engagement there is with them.
Accountability in content recommendation systems is also required when it comes to speech, he said, offering a comparison with household goods going through a series of safety checks.
“We still don’t have common standards for the safety of the content recommendation systems that are so vital to discourse in our societies,” he said, adding that those tech firms that won’t co-operate should be exposed.
Solutions will not come from national governments, he said, but from a focus on transparency in the way content is promoted.