With insights from Dr. sc. ETH David M. Sommer (Lead Security Consultant), Jerry Napitupulu (Principal Security Consultant), and Vlad Flamind (Lead Data Consultant).

No risk, no reward. At least, that’s how the saying goes. But when it comes to AI, does the former really outweigh the latter?

As businesses prepare to hand over more and more of their information and processes to an emergent set of technologies and tools, CISOs everywhere are getting ready for the potential fallout. But are AI risk management concerns just a lot of hand-wringing over nothing? Or might unchecked models run amok with hard-earned business reputations?

At Zühlke, we’ve been working with an interdisciplinary group of data scientists, security experts, business consultants, financial experts, and our own clients to find out. Here’s what we’ve learned about the future of AI risk management and governance – and how a mix of typically separate domains is about to muddy the waters...

AI risk in business: a many-headed beast

Let’s take a step back. What do we actually mean when we talk about AI risk? Because, in a business setting, it’s more complicated than you might think.

Categorising enterprise-level risk is typically a case of separating weaknesses into a few distinct buckets: security, operational, or financial threats. But AI is a different beast; its use bridges those divides. It crosses over from the technical world to the wider socioeconomic one – and throws ethical quandaries into the mix too.

All of us are interacting with AI in some way, shape, or form. We all have accounts with various tools. And we all have varying confidence in our ability to use them safely – in and outside a business setting. That makes it tough to know whose responsibility AI risk management is. Is the real issue that people will unwittingly hand over company information as training data for loose-lipped AI models? Or that low-quality output will cause reputational damage?

The impact of these different risk types can vary greatly too. Consider the legal implications and reputational loss incurred by a chatbot that misleads customers, the property damage and health and safety implications of an autopilot drone crashing into a building, or the potential financial and operational losses of a faulty forecasting model.

Different stakeholders will have varying perspectives on where the biggest dangers lie, and that kind of ‘design by committee’ thinking can leave businesses at a standstill.

But, to some extent, time is a flat circle. There are no truly new threats – just amplified ones. Data loss has always been something to squash. Security breaches and poor-quality input or output, ditto – even if the context is now colleagues using GenAI tooling in their daily work. What’s changed, then, is how we need to react to them in a world where the technology is shifting so rapidly under people’s feet. And that requires a balanced approach.

Balancing AI risk mitigation and innovation

Inadequate risk controls are one of the main reasons why many companies are stuck in AI ‘pilot purgatory’ and fail to scale their AI proofs of concept (POCs). Managing these risks effectively is therefore essential to ensuring a return on AI investments. But there are no blanket approaches or easy answers. AI innovation and risk management have to meet somewhere in the middle – and that remains a largely unsolved problem.
POCs that use in-house technology are struggling to make it beyond the pilot stage due to safety restrictions. Meanwhile, tools that use off-the-shelf models – with contractual safeguards in place – are more limited in their ability to provide genuinely bespoke innovation.

Worse still, it’s a tough fall from this tightrope. Our own research into secure AI suggests that the business impact of failing to mitigate AI risks can be significant. And other research suggests that revenue loss and customer loss are the largest negative outcomes of risks such as AI bias. The business impact of these risks is very real. At the same time, if you over-mitigate, you may never leave the proof-of-concept phase. So what can be done?

Mitigate AI risks with cross-lifecycle governance

Designing a clear framework for in-house AI governance is key to mitigating AI risks while fostering innovation. To do this effectively, AI governance needs to address how the entire lifecycle of an AI solution should be managed – from the initial idea to deployment to ongoing monitoring.

This long-term view is what maturity in governance terms is all about. And it’s where many businesses stumble. You need to move beyond ‘is this tool safe for us to deploy?’ towards regular, iterative measurement of every aspect of its risk profile.

So what does AI governance best practice look like? Engaging all relevant stakeholders, your chief data or analytics officer needs to adopt an approach that includes:

- strategic portfolio management;
- ongoing value monitoring; and
- proactive consideration of ethical and legal requirements (the EU AI Act, for example).

The key is to be robust and inclusive in your risk scoring. Use case, operational, and strategic concerns all need to be considered – not as competing interests, but as parts of a whole (see the illustrative sketch at the end of this section).

[Visualisation: different risks pose different challenges. Here, AI risks are categorised into strategic, operational, and use-case-based considerations.]

With an approach that checks every relevant box, each risk framework will look very different – shaped largely by the makeup of the organisation and the proposed tools. Credible AI risk management, then, is holistic. It’s fit for purpose and involves people from across the business.

How to create your AI governance framework

We recommend the following four-stage process to develop an effective framework that ensures AI governance at every step of an AI product’s lifecycle:

1. Scope risk
- Define organisational risk appetite
- Determine the scope of the AI system
- Identify key stakeholders

2. Assess risk
- Evaluate potential strategic threats to the organisation’s AI goals
- Map workflows and pinpoint potential risk areas
- Conduct risk assessments tailored to each use case

3. Mitigate risk
- Develop strategic risk mitigation plans
- Implement process improvements and controls
- Establish monitoring and evaluation mechanisms for each use case

4. Implement
- Integrate mitigation plans into strategic planning processes
- Monitor performance and risk indicators
- Continuously improve operational risk mitigation strategies

Do this right – with ample care and attention at every step – and you’ll be as disruptive as any AI hype. That’s because businesses that build a sound framework around their AI governance and risk mitigation strategies save money, are quicker to realise value from their tools, and avoid reputational fallout.
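To make the idea of inclusive risk scoring a little more tangible, here is a minimal, hypothetical sketch in Python. Everything in it – the category names, the likelihood-times-impact scale, the appetite thresholds, and the example risks – is an illustrative assumption, not a prescribed methodology; a real framework would be shaped by your organisation and its stakeholders, as described above.

```python
# Hypothetical sketch only: a minimal risk register that scores AI risks across
# the three categories discussed above (strategic, operational, use case).
# Names, scales, and thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    STRATEGIC = "strategic"
    OPERATIONAL = "operational"
    USE_CASE = "use_case"


@dataclass
class Risk:
    name: str
    category: Category
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Classic likelihood x impact scoring; any scale could be substituted.
        return self.likelihood * self.impact


def risks_above_appetite(risks: list[Risk], appetite: dict[Category, int]) -> list[Risk]:
    """Return risks whose score exceeds the organisation's appetite for that category."""
    return [r for r in risks if r.score > appetite.get(r.category, 0)]


if __name__ == "__main__":
    register = [
        Risk("Chatbot gives misleading advice", Category.USE_CASE, likelihood=3, impact=4),
        Risk("Staff paste client data into public GenAI tools", Category.OPERATIONAL, likelihood=4, impact=4),
        Risk("EU AI Act non-compliance", Category.STRATEGIC, likelihood=2, impact=5),
    ]
    appetite = {Category.STRATEGIC: 8, Category.OPERATIONAL: 10, Category.USE_CASE: 12}

    for risk in risks_above_appetite(register, appetite):
        print(f"Mitigation plan required: {risk.name} (score {risk.score})")
```

The point is simply that risk appetite, categories, and scores live in one place, so they can be re-scored on a regular cadence as part of the monitoring and evaluation mechanisms in stages 3 and 4 above.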
Escape AI pilot purgatory with Zühlke

Gain resilience and safeguard your organisation with an intelligent security approach and ingrained, lifelong systems. Our team develops AI risk mitigation programmes tailored to your business, with processes designed for your unique needs. Speak with us today to learn more.