We hear a lot about the benefits of artificial intelligence. How it can help us to understand big data, protect us from malware, and free up humans from repetitive tasks to do more fulfilling work. But there’s a darker side to AI too. As with any technology that can make things easier for the good guys, the bad guys want a share of the action too.
AI technologies could be used to transform the world of cybercrime, aid social media manipulation and even enable physical attacks. A report by a group of academics suggests that we should be taking the threat posed by AI more seriously.
In the past, it’s been necessary for hackers and those engaged in cybercrime to have at least some level of technical skill. The impact of AI could be to lower the entry requirements for attackers, allowing them to carry out sophisticated cyber assaults using tools readily available on an as-a-service model from the dark web.
AI can also be used in profiling, taking data from social media and using it to identify targets for social engineering attacks, helping to maximise the chances that the user will click a link in a phishing email or download an infected attachment. We already know that hackers use corporate information and social media to help direct attacks; adding AI is likely to make these sources of data still more attractive.
The report points to the potential for political manipulation too. Hackers and nation states will be able to use ultra-personalised disinformation campaigns to spread rumours and fake news ahead of elections. AI could also be used to deny users accurate information by creating a flood of misleading stories, making facts harder to find.
There’s the potential to exploit AI’s increasing use in the physical world too. Self-driving vehicles and drones could be hijacked and ‘weaponised’ in order to create disruption or carry out terrorist activity.
Of course, the world is already full of technology hazards. We’ve suffered from malware and identity theft attempts for many years, and many current efforts to guard against attacks now focus on machine learning and artificial intelligence.
But the other side is aware of these technologies too. We could be entering the start of an AI arms race in which each side tries to gain a technical edge. At the moment, it’s probable that the hype – on both sides – is outstripping the technology. On a positive note, it seems that AI will be of more use on the defensive side, where large volumes of data relating to attacks are available, allowing accurate conclusions to be drawn and aiding the development of systems that are harder to attack.
The pace of change is rapid and if companies want to exploit the opportunities AI offers to improve their systems and combat threats, they need to find the best people. The team at Clifford Associates can help to locate the digital experts you need.