As artificial intelligence becomes a pressing reality – and priority – for businesses, the debates on its regulation are also intensifying. Key questions need to be answered, particularly around data management and liability for AI and its outputs. But are businesses that use it paying sufficient attention to the legal risks and implications?
At a recent conference on AI fundamentals, 56% of businesses said that they were already using AI tools. Of course, even a smartphone is an AI tool, so the vast majority of people now use the technology, even if they don't realise it. There is little doubt that AI adoption in businesses is growing fast, and automation is now an everyday occurrence, especially as businesses of all sizes and across all industries push forward with their digital transformation strategies.
Research has shown that AI in business is most commonly used behind the scenes, for IT systems and R&D. In front-of-house contexts, it is used above all for customer service, for example via chatbots on a website. Another key area is marketing, PR and advertising, where customer profiling is used to personalise and tailor offers. Management of fleet, facilities and operations also ranks among the primary uses.
At the other end of the scale, AI is least used for legal tasks. For example, there are already plenty of AI tools that can carry out document review, but most businesses still choose not to use them, even though the latest software packages are highly sophisticated.
Should businesses be worried about using AI where legal risk can be generated? Are businesses aware of the extent and scope of these risks?
In some areas, businesses are now alert to the risks, such as bias, which can result in discriminatory hiring or managerial outcomes. But many don't yet fully understand the privacy risks that can emerge from automated systems, even though the EU can, and does, levy heavy fines where data compliance falls short. Research also suggests that many businesses are not testing their systems for vulnerability to attack. Some check for unexpected outcomes, or verify predicted ones, but does this go far enough?
When it comes to governance, again, there is a concern that businesses view AI governance as an add-on to their strategies rather than a core deliverable. In one survey, 25% of respondents said that they didn't expect to have a data governance policy in place until next year, and a further 33% said it would take three years.
It's worth remembering that the concept of data governance goes far beyond data privacy and compliance. AI systems require data to work, and their outputs are defined by the quality and scope of that data. Good data governance means assuring the data's provenance, definition and lineage. Businesses can then be transparent about how their tools function and identify where the system needs to be strengthened. Those that fail to appreciate just how vital good-quality, well-managed data is put themselves at risk.
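To make that concrete, the sketch below shows one way a business might record a dataset's provenance, definitions and lineage so that an AI system's outputs can be traced back to the data that shaped them. It is a minimal, hypothetical illustration: the class, field names and example values are all assumptions, not a reference to any particular tool or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration only: a minimal record of where a dataset came
# from (provenance), what its fields mean (definition) and how it has been
# transformed over time (lineage). Names are assumptions, not a standard.

@dataclass
class DatasetRecord:
    name: str                    # e.g. "customer_profiles_v3"
    source: str                  # where the raw data originated
    owner: str                   # who is accountable for the dataset
    definitions: dict[str, str]  # column name -> plain-English meaning
    lineage: list[str] = field(default_factory=list)  # ordered change log

    def log_transformation(self, description: str) -> None:
        """Append a timestamped entry so every change to the data is traceable."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.lineage.append(f"{stamp}: {description}")


record = DatasetRecord(
    name="customer_profiles_v3",
    source="CRM export, 2024-01-15",
    owner="data-governance@example.com",
    definitions={"churn_risk": "model-predicted likelihood the customer leaves"},
)
record.log_transformation("Removed records lacking marketing consent.")
record.log_transformation("Deduplicated on customer_id.")
```

A record along these lines lets a business answer the questions a regulator, or a court, is likely to ask: where did the data come from, what does each field mean, and what has been done to the data since it was collected.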
But what about the legal element? It's true that emerging technologies are not yet comprehensively governed by existing laws, and this is either an opportunity or a risk depending on how you view the situation. If a business takes a relaxed approach now, it can be extremely difficult, not to mention costly, to retrofit compliance later. In many cases, it may not be possible at all.
A far better approach is to build considerations of law and good practice into the system as it is being designed. Where legal requirements are unclear, as they may be in relation to discrimination, privacy or the data itself, established best practice can fill the gap. For businesses, the imperative now is to think about these issues and plan accordingly.