Data & AI

What does AI mean in the context of security?

New Zühlke research shines a light on the shifting security landscape and the changing role of the CISO. Here we unpick the key findings and share practical steps on how to defend against emerging AI threats and the careless use of AI systems. 

7 minutes to read
With insights from...

The secure AI adoption tug-of-war

If there’s one idiom that will resonate with CISOs everywhere, it’s probably: don’t run before you can walk.  

Security leads have long stressed the importance of laying the right groundwork for successful outcomes. And in today’s rapidly changing AI landscape, that ethos has never been more important.

In the race to innovate and compete (and, often, to please shareholders), businesses are tripping over themselves to bring AI into the heart of their tech stack. But this can be a dangerous game if it means bypassing adequate threat analysis processes.  

It’s a tug-of-war that looks set to stress-test frameworks and redefine roles across the information security industry. So what’s the safest way forward?  

To help find answers, we conducted qualitative research with 22 chief information security officers (CISOs) and security leaders across the DACH region – in complex and, in some cases, highly regulated sectors, such as healthcare, financial services, pharma, government, IT, and transport.

Here we explore what those findings can tell us about the evolving security landscape, secure AI best practices, and the changing role of the CISO.

Key findings from our interviews with CISOs in Europe:

95% agree businesses will embrace AI adoption, regardless of security

95% plan to educate all employees on AI security

71% say their role is changing from ‘security control’ to ‘risk decision facilitator’

45% expect their business to create dedicated AI security roles

44% say their firm is well prepared for the transition to AI

Understanding the risks of rapid AI adoption

Before we can explore solutions, we need to understand the risks. And, though AI implementation is far from a doom-and-gloom situation, it does come with a significant dose of pitfalls to navigate.

Pace is a huge problem here. 95% of the CISOs we interviewed agreed that businesses are likely to embrace AI regardless of security concerns, while only 28% believe the industry is prepared to overcome them.

Complicating matters is the fact that, while some of the industry’s biggest risks are brand new, others are age-old security worries given a new dimension thanks to AI.

Key AI-empowered threats include:

  • Prompt injection

    ‘Prompt injection’ happens when an AI system confuses data and instructions. This is especially dangerous with emerging tools that have access to users’ computers – as with the new ‘computer use’ ability debuted by Anthropic’s Claude.

    As an example, if you ask an AI system to summarise a PDF file, and the data in the document contains a malicious instruction, the AI could execute that instruction on a system level – with devastating consequences. 

  • Data poisoning

    Data poisoning is the term given to the act of purposefully tampering with AI training data, with a view to hamper its output. Whether it’s written, numerical, or image-based, feeding erroneous information into an AI model that leans on that data for its results will naturally lead to incorrect answers being presented as fact. 

    With AI models set to process and derive business intel from large amounts of data, data poisoning comes with the risk of executing faulty business decisions, resulting in significant damage, possible business bankruptcy, and even loss of life.

  • Model transparency and data collection

    The vast majority of businesses using AI rely on third-party models, rather than making their own. That’s a problem because these existing models tend to be technological ‘black boxes’ – with zero transparency as to their inner workings. Further to this is the issue that the companies providing these models need a near-endless stream of new data to help them improve.

    All AI organisations have a vested interest in making sure that they can gather new information – and new human-generated output. This is true even in the case of open-source models; if new (aka fresh) data doesn’t go in, hallucinations will begin to spiral.

    Businesses need to be sure that their private information isn’t being leaked out into the training data for publicly available AI tools.

  • Human error

    If the people within the business aren’t adequately educated on AI-centric risks, they’ll often become one of the weak links in the chain. Employees told to use a specific, locked-down tool, for example, may take it upon themselves to use another that they prefer – one that incorporates freely given personal or business information into its training corpus. 

    Humans can fall for more bespoke, targeted threats too. Prominent examples include intense spear phishing campaigns, which come with voice and video chats indistinguishable from authentic ones. 

A pragmatic and zero-trust stance

AI’s business impact will be transformative in more ways than one; the role of the security officer is going to need to change alongside the tools at hand.  

CISOs are cautious by necessity, but becoming the voice of reason against AI bullishness might increasingly put them in the role of the nay-saying bad guy.  

In our research, 71% of CISOs believed that their role is changing from being the owner of ‘security control’ to one of ‘risk decision facilitator’. Here, the game becomes one of saying yes to the lesser of several evils instead of having full, end-to-end control of the tech stack.

The responsibility of CISOs is to ask: ‘Is the technology mature enough that we can deploy it to our employees?’ And if not, they need to explain why. It's more about acting in the interest of the greater good than the competitive advantage your organisation stands to gain.

In many cases, that means assuming the worst.

'We’re moving to a zero-trust paradigm, where we place no trust in any of the systems or requests we receive. From there, we evaluate and validate thoroughly before execution’.

This stance must be based on a level of healthy skepticism, but also on a deep understanding of the technology itself. Security practitioners must have a fundamental understanding of what is and isn’t possible with AI.

This becomes especially important when considering that only 45% expect to create dedicated AI security roles, and just 32% plan to hire AI security experts.

""

Secure AI best practices

With multiple risks butting heads with the pressure to adopt AI rapidly, it’s no surprise that less than half of the CISOs we interviewed believe their organisation is well-prepared for the transition to AI. 

  • ‘My organisation is well prepared for the AI transition'.

    Only 44% of CISOs agree

    Only 44% of CISOs agree

So what can we do to hold back the tide of possible security breaches? As is usually the case, the solution often begins with people, not technology.

Let's start with something CISOs have been talking about for years: awareness and education. It sounds overly simple, but with new technology comes new risks, and these new risks must be explained.

Of course, education can be a hard sell. You don’t want to go in front of people and say: here’s another boring, two-hour education session, because people generally hate that. But for compliance and security, education is key. We need AI literacy across employees, society, the economy – across the board.  

'A good way to improve literacy in the AI era is to let people experiment in safe environments. Rather than have colleagues feed sensitive data to public models, they should be given access to well-shielded environments where they can gain their own hands-on experience'.

The second part of the puzzle is about getting processes and technologies under control. That means working on internal governance programmes and frameworks built to handle this new technology, as well as using controllable, open-source, and explainable AI models wherever possible.  

Our recommendations here are clearly outlined in our responsible AI framework, but in the main, we’d suggest that CISOs exercise caution, control, and patience above all else.  

The following eight steps provide a rock-solid starting point for your secure AI strategy: 

  1. Don't rush AI adoption; risks can be mitigated by improving existing procedures
  2. Build expertise through controlled experimentation and training
  3. Create employee awareness about AI risks and guidelines 
  4. Adapt internal governance processes 
  5. Participate in knowledge-sharing networks 
  6. Follow emerging security recommendations and standards 
  7. Stay informed about the ever-evolving tools and regulations 
  8. Maintain constructive skepticism, while remaining open to AI's benefits 

The good news is that there’s already evidence of this groundwork in action. In our study, 80% said they planned to have policies regarding AI security issues in place in the next 24 months, while 95% plan to educate their entire employee base on AI security.  

Secure AI: a game of caution, control, and patience

Ultimately, while it may be impossible to keep every AI risk under control at all times, it’s important to stay pragmatic in your approach. Combining the strengths of people and machines – and maintaining human oversight – is often the key to secure AI use.  

Zühlke is already helping organisations across industries walk this complex tightrope – providing a guiding hand in translating proofs-of-concept tools into scalable, secure solutions.

Explore the full findings from our CISO interviews