AI may be changing how we work, but it also brings an urgent need for governance to ensure its use is compliant, fair and ethical. Summarising the key takeaways of a session on AI governance at the GRC #Risk Conference, Dr Kevin Macnish, Head of Ethics and Sustainability Consulting at Sopra Steria Next UK, reveals why it's vital for businesses to take action.
Imagine if your company's new AI tool discriminated against people with darker skin tones or leaked users' personal data. Scenarios like these are happening right now as you're reading this.
AI is quickly weaving its way into our home and work lives, making its governance more urgent than ever. We’ve known about AI’s ethical challenges for decades, but developments in the last five years pose real threats to our rights and wellbeing if not properly governed.
Some legislative bodies, like the European Parliament and the State of New York, have introduced legislation to put guardrails around AI development and use. Others, like the UK government, are taking a more cautious approach. But whilst the authorities decide on the level of regulation, companies are adopting AI at pace and are not always putting governance in place, or maintaining it, at the same rate.
These issues were discussed at a recent panel at the GRC #Risk Conference, held at London's ExCeL Centre in October 2024, which brought together governance, risk and compliance experts from around the globe.
The panel, which I chaired, included Teodora Pimpireva Tapping, global head of privacy at Bumble; Eleonor Duhs, head of data privacy at Bates Wells LLP and the UK's chief negotiator for the GDPR; Ivan Djordjevic, principal architect for security, privacy, and identity at Salesforce; and Marc Rubbinaccio, head of compliance at Secureframe.
The panel covered three core areas: the current challenges, how to move beyond lists of principles, and the motivation to put robust governance in place, especially where there is no overarching legislation, as in the UK.
Current challenges
A core challenge raised repeatedly on the panel was the need for cross-functionality. AI governance isn't just for lawyers or tech specialists; it's like assembling a football team. You need everyone on board - lawyers, tech experts, ethicists, and more - working together towards the same goal.
At Sopra Steria, for example, the AI governance board consists of our chief technical officer, chief information security officer, head of legal, head of procurement, data protection officer, and head of ethics consulting.
Governance is also harder in jurisdictions, such as the UK, that have no overarching AI legislation. The UK currently has a patchwork of laws and regulations that collectively govern AI use (such as the Equality Act 2010, the UK GDPR, and others), which makes compliance complex and uncertain, especially for small and medium businesses without the resources for specialised AI governance oversight.
Principles vs practice
While principles are important as a starting point, they cannot be the last word on the matter. Stopping at principles only creates confusion when different principles clash and there is no clear guidance on which should be traded off.
Think of a case where profitability may clash with explainability. It’s easy to say explainability should always come first, but in reality, businesses have to balance explainability against profitability and their risk tolerance, while remaining ethical and within the law. Should we stop using (and should OpenAI and Anthropic stop offering) tools such as ChatGPT and Claude because their output is not fully explainable?
Again, the need for cross-functionality was raised as an essential prerequisite for moving effectively from principles to policy to the implementation of standards. Which standards to adopt (ISO 27001, ISO 42001, the NIST Risk Management Framework and others) is another decision that has to be made.
Motivation
While organisations may recognise the need for governance, they may struggle to justify the budget if there is no legislation demanding it. Even so, good governance can be a differentiator in those contexts, and certifications such as ISO 42001 will become increasingly valuable in helping suppliers stand out in a crowded market. Good governance can also help organisations bring some order to the chaos many of us are experiencing with AI.
Lastly, we’ve all heard of the Universal Declaration of Human Rights. Even though some organisations may not be subject to, for instance, the fundamental rights requirements of the EU’s AI Act, the call to respect human rights such as non-discrimination, privacy and freedom of expression is universal.
Key takeaways
To wrap up, the panellists left us with some key takeaways: audit your AI systems so you know where they're being used; don't get swept up in the hype of new tech; make sure everyone across your organisation knows their responsibility for the models in use; and hold your suppliers to account for how they are implementing AI governance.
Conclusion
For all of the excitement and pace of development in AI, there are some core risk management principles that should underpin your implementation. Know what your organisation has and is using; review what is coming into your organisation (and what is going out); and ensure that good governance sits within the organisational culture and does not reside in one function alone. Given the urgency around governance, if no one is taking clear responsibility for AI in your organisation, maybe it's time to ask yourself: what's your role in making sure AI is compliant, fair, and ethical in your workplace?