AI & Compliance: Opportunity or Challenge?
Artificial intelligence is on the rise. How can compliance officers benefit from recent advances in the technology and what do they need to keep in mind?
Since ChatGPT took the internet by storm, artificial intelligence has been firmly planted in our everyday lives. The intelligent bot has amazed us with abilities that would have been considered science fiction just a short time ago. Whether answering questions, writing poems, programming software or solving exam questions, it sometimes performs even better than a human. Long before ChatGPT, we were already using numerous other AI applications, most of the time without even realising it. Often they run in the background, for example when Amazon suggests similar products, Spotify recommends music or Siri talks to us.
What is artificial intelligence?
Artificial intelligence is one of the most important technological trends of our time, and it will lead to widespread change. For the compliance sector, it will bring numerous advantages, though also challenges. AI is the ability of an IT system to mimic human intelligence. An AI learns continuously and can make decisions independently. Techniques such as machine learning and neural networks are utilised, with the latter referring to a type of algorithm modelled on the way the human brain works. The AI applications we know today fall into the so-called “weak” AI category. This is not because they are bad, but because they were developed specifically for certain tasks such as speech recognition, image recognition or playing chess. Strong AI, on the other hand, would be able to solve any kind of complex problem on its own. It would genuinely think and act like a human, and we are still a long way from that point. The highest level of development would ultimately be an artificial superintelligence that even surpasses human capabilities.
How AI supports compliance
AI comes into its own whenever large volumes of data need to be processed and analysed. What would otherwise take hours or days, intelligent algorithms accomplish in seconds. As a result, they can optimise processes and relieve the burden on human employees. This brings valuable benefits to the compliance area as well. Here are some examples:
- AI systems can help uncover potential compliance violations and irregularities faster. The algorithms are able to identify known patterns and learn new ones on their own. Many banks are already using this functionality for fraud detection.
- AI can assist in compliance risk analysis and assessment by collecting, evaluating and correlating data from many internal and external sources.
- AI can filter compliance documents such as legislation and standards, conduct an analysis, prioritise the most relevant information and save compliance managers a lot of reading time.
- An AI-powered chatbot makes it easier to communicate compliance policies within the company in an understandable way. It can answer employees’ questions and is constantly available.
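The fraud-detection use case above rests on a simple idea: flag transactions that deviate strongly from learned patterns. As a minimal sketch of that idea, the following uses a plain statistical outlier check; the function name, data and threshold are illustrative assumptions, and real bank systems use far more sophisticated trained models.

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Flag transaction amounts that deviate strongly from the norm.

    A real compliance system would use a trained model; this z-score
    check only illustrates the underlying idea of pattern deviation.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# Hypothetical payment amounts: one is far outside the usual range.
payments = [120, 95, 130, 110, 105, 98, 20000, 115]
print(flag_outliers(payments))  # → [20000]
```

The same principle of scoring deviations from a learned norm underlies machine-learning approaches; they simply learn much richer notions of "normal" than a single mean and standard deviation.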
New compliance risks through AI
For all its benefits, AI can also become a compliance risk in itself. For example, what if a chatbot used in recruiting discriminates against applicants based on skin colour or gender? In fact, the decisions an AI makes always depend on the data it is trained with. If this data is not diverse enough or reflects societal biases, those biases transfer to the AI. For example, a recruiting system may predominantly suggest white males for a leadership position because it has learned from historical data that such applicants were preferred in the past.
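The mechanism by which bias transfers can be shown with a deliberately tiny toy model. The data and group labels below are invented for illustration; real recruiting AIs are far more complex, but the principle is the same: a model trained on skewed historical records reproduces the skew.

```python
from collections import Counter

# Hypothetical, deliberately skewed historical records of past
# leadership hires (labels are illustrative placeholders).
history = ["group_a"] * 9 + ["group_b"] * 1

def recommend(records):
    """A naive 'model' that recommends whichever group dominated
    the training data — exactly how historical bias is reproduced."""
    return Counter(records).most_common(1)[0][0]

print(recommend(history))  # → 'group_a': the historical bias carries over
```

No matter how the recommendation logic is dressed up, if nine out of ten past hires came from one group, a model optimised to match that history will keep favouring it.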
Blindly relying on AI is dangerous, especially when decisions can no longer be traced. This is why we also speak of the black box effect. Since an AI learns independently, it changes continuously. If it is fed bad data or even manipulated by hackers, it can make wrong decisions and malfunction. That’s why it’s important to check the AI regularly.
Data protection presents another major challenge, because AI systems process vast amounts of data, which can include personal information. Compliance officers must ensure that the GDPR is adhered to, and there are still many unanswered questions in this regard. For example, data that a user enters into an application is often used for the further development of the AI, whereby it flows into and merges with the AI model. What is the legal basis for this process? And how can data subjects still exercise their right to have that data deleted?
The EU AI Act is coming
EU lawmakers are also addressing the risks AI can pose. On 9 December 2023, they reached a provisional agreement on the AI Act, which is intended to regulate the development and use of artificial intelligence. The final contents of the law remain unclear, and it is expected to come into force by 2025 at the earliest. However, it is known that the legislation distinguishes four risk levels, from minimal to unacceptable. Applications that fall into the latter category are banned, while the others are subject to requirements tailored to their risk category. Non-compliance can result in fines of up to 35 million euros or seven percent of total global annual turnover. The AI Act affects nearly all companies, and it is likely to make waves similar to the GDPR as a result. Compliance officers will therefore have even more work to do.
Conclusion
While artificial intelligence will become more and more prevalent, it is not yet possible to predict where it will all lead. For the compliance sector, the technology is both a challenge and an opportunity. On the one hand, it helps to master growing requirements; on the other, it brings new risks. Compliance managers can no longer avoid dealing with artificial intelligence, and with the impending EU AI Act, the topic will become mandatory.