Fake news is a relatively new term, but in just a few years it has had a major effect on society and democracy. While those in power have used propaganda and misinformation for centuries to gain support and quash opponents, the advent of social media has enabled people to create and spread information on a far greater scale, manipulating and influencing public opinion. Studies show that fake news spreads up to six times faster than true stories, with systematic campaigns deliberately designed to go viral.
Social media platforms are under pressure to crack down on fake news. Facebook and Twitter have attempted to combat the flood with a large influx of content moderators, but fact-checking individual articles is time-consuming work. It’s widely predicted that technology will play an increasingly significant role in this area, and a number of initiatives are exploring how artificial intelligence, in particular natural language processing and machine learning, can reliably be deployed to identify and flag fake news content.
How does it work?
Automated fake news detectors use natural language processing to deconstruct articles and zero in on specific linguistic patterns. The programs use an initial library of real and fake news to “learn” what constitutes unreliable information and can then apply that knowledge to distinguish fact from fiction when presented with new content.
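As a rough sketch of that learn-then-classify loop, here is a toy bag-of-words Naive Bayes classifier in plain Python. The headlines, labels and class names below are invented purely for illustration; real detectors train on large labelled corpora and use far richer models than word counts.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesNewsClassifier:
    """Toy Naive Bayes: learns word statistics from a labelled 'library'
    of real and fake examples, then scores unseen text."""

    def __init__(self):
        self.word_counts = {"real": Counter(), "fake": Counter()}
        self.doc_counts = Counter()
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            self.doc_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.doc_counts:
            # Log prior plus smoothed log likelihood of each token
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                count = self.word_counts[label][word] + 1  # Laplace smoothing
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented training headlines, purely for illustration
training = [
    ("SHOCKING miracle cure doctors don't want you to know", "fake"),
    ("You won't BELIEVE this one weird trick", "fake"),
    ("Council approves budget for new library extension", "real"),
    ("Central bank holds interest rates steady this quarter", "real"),
]

classifier = NaiveBayesNewsClassifier()
classifier.train(training)
print(classifier.predict("SHOCKING trick doctors don't want you to know"))
```

The “learning” step is just counting which words appear in each class; scoring a new article then compares how well its wording fits the statistics of each side of the labelled library.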
So what sort of language patterns are these programs looking for? Fake news often uses sensationalist or exaggerated language intended to appeal to readers’ emotions, whereas traditional news articles tend to be more measured in their choice of words. Certain phrases may signal political bias, which can increase the likelihood that an article is fake. Spelling or grammar errors can also be a tell, suggesting an article written with more concern for grabbing attention than for accuracy.
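Cues like these can be turned into simple numeric features. The sketch below counts a few of them; the cue-word list and the specific features are illustrative assumptions, not drawn from any real detector, and a trained model would weigh many such signals together.

```python
import re

# Hypothetical sensationalist cue words; a real system would learn these from data
SENSATIONAL_WORDS = {"shocking", "miracle", "unbelievable", "secret", "exposed"}

def style_features(text):
    """Count simple stylistic cues: exclamation marks, shouty all-caps
    words, and known sensationalist vocabulary."""
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "exclamations": text.count("!"),
        "all_caps_words": sum(1 for w in words if len(w) > 2 and w.isupper()),
        "sensational_words": sum(1 for w in words if w.lower() in SENSATIONAL_WORDS),
    }

print(style_features("SHOCKING! The secret they EXPOSED!!"))
```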
It’s not just about the linguistics, however. Machine learning can also look at the URL of the article: does it closely mimic an established site, or has the site published fake news before? Analysis of an article’s metadata is extremely helpful in classifying real and fake news: for example, how often are tweets being sent and where did they originate? An image’s metadata can be checked to compare its background information with how it is currently being used, thereby sniffing out genuine content that is published in a misleading or fake context.
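The URL checks can be sketched with two simple heuristics: a list of domains that have published fake news before, plus fuzzy matching to spot lookalikes of established outlets. The domain lists, similarity threshold and example URL below are all invented for illustration; a production system would use curated, regularly updated databases.

```python
import difflib
from urllib.parse import urlparse

# Illustrative lists only, not real reputation data
KNOWN_OUTLETS = ["bbc.co.uk", "reuters.com", "theguardian.com"]
KNOWN_FAKE_DOMAINS = {"totally-real-news.example"}

def url_risk_signals(url):
    """Return warning signals: a domain that has published fake news
    before, or one that closely mimics an established outlet."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    signals = []
    if domain in KNOWN_FAKE_DOMAINS:
        signals.append("previously published fake news")
    if domain not in KNOWN_OUTLETS:
        # A close-but-not-exact match to a trusted domain suggests mimicry
        lookalike = difflib.get_close_matches(domain, KNOWN_OUTLETS, n=1, cutoff=0.6)
        if lookalike:
            signals.append("mimics " + lookalike[0])
    return signals

print(url_risk_signals("https://www.reuters-com.example/article"))
```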
There is plenty more research to do in this area, especially as fake news campaigns become cleverer in their use of language to evade detection. Digital and marketing companies would certainly benefit from tools that can assess whether news is real or fake, and while artificial intelligence can do the bulk of the heavy lifting, businesses will need specialists who can check results, analyse fake news trends and influence the direction of machine learning.
It’s a rapidly evolving field, and keeping up with today’s digital resourcing requirements is a real management challenge. Talk to the Clifford Associates team to see how our knowledge and experience can connect you with the people you need.