ChatGPT, Bard, DALL-E and GitHub Copilot. The development within artificial intelligence (AI) is moving at a rapid pace. While the benefits are many, AI presents never-before-experienced security challenges. The question is: can the safety mechanisms designed to protect us keep up with the development of malicious use of AI?

Artificial intelligence, or “AI”, is an emerging technological force that is reshaping business and society. We are just starting to see the transformative effects of this technology in delivering value to industries, organisations and individuals. AI is not a futuristic technological concept.As we’ve already seen thanks to tools like ChatGPT, Bard, DALL-E and GitHub Copilot, AI is already well-integrated into our daily lives and plays an important role across a range of industries.

At the same time, we are still at the very beginning when it comes to secure implementation and use of AI, in terms of cyber-security as well as from a wider security and safety perspective.

We have already seen several examples of how well-intentioned applications of predictive AI can have serious, unforeseen, and negative consequences.

As with any new technology, AI comes with security challenges

The breakthroughs and public availability of generative AI for text, images, voice and video, have brought AI to the attention of a much larger audience. However, most of us have in fact interact-ed with AI functions for years, as machine learning and AI models for categorisation and prediction are already widely applied. AI is found in services such as personalised viewing recommendations on Netflix, in voice assistants, or in customer service chatbots.

We have already seen several examples of how well-intentioned applications of predictive AI can have serious, unforeseen, and negative consequences. These include reinforcing bias, propagating discrimination, and in subtle ways embedding undesirable values and preferences from the datasets used to train the AI. As generative AI continues to take centre stage, we are also witnessing how careless applications can lead to misinformation being accepted as fact, even in legal settings. More worryingly, we have also observed that threat actors, driven by both criminal and more sinister motives, are exploiting this technology to further their malicious intents.The secure use of text-generative AI, even in a common business function use-case like a chatbot,has already proven its potential for unintended consequences. This was recently experienced by a Canadian airline whose chatbot had given an excessively favourable offer to a customer. The airline argued that the chatbot was a separate legal person or entity for which the airline was not accountable nor legally liable, a sentiment to which the Canadian small claims court did not agree.

With the fast development of AI-tools, it’s sometimes hard to separate reality from fiction. During the summer of 2024 AI-generated images gave a false impression of Norwegian scenery. For the record, the top image is AI-generated, whereas the bottom image is a photograph.
With the fast development of AI-tools, it’s sometimes hard to separate reality from fiction. During the summer of 2024 AI-generated images gave a false impression of Norwegian scenery. For the record, the top image is AI-generated, whereas the bottom image is a photograph.

Just as we expect AI to be pervasively transformative for businesses and society, it will likely prove to become just as available to threat actors.

Researchers are proving that the securing of AI models and its applications, from a technical security standpoint, is still in its infancy. They have amongst others, documented how “prompt injection,” or innovative and creative feeding of text into a text generating AI system, has led to unexpected and troubling out- comes. Although these AI systems are built to handle random inputs,some tests have managed to compromise the underlying computer system on which the models operate.

AI-driven assistants are entering our workplace at full speed, and with a varying degree of control. Businesses are now starting to realise that the AI input and output open a whole new realm of data security. This includes the information returned back to the users from the AI assistant,and in equal parts, all the data that is shared by the employees as part of AI processing and training. The latter risk is less obvious for many. It all needs to be secured.

There is little doubt that all manners of AI technology will be applied to gain potentially dramatic improvements in efficiency across all kinds of business processes and systems. It will also enable us to cope with – and thus adopt – processes and systems of growing complexity. When coupling AI decision making with automation, however, the risks need to be properly under- stood.Having a human in the loop for high-risk decisions may often be required.

It’s not only "AI for good"

The newfound availability of free or very affordable generative AI services for text, images, audio and video has found a relatively immediate application with cybercriminals. They use it either as a directly applied tool in perpetration of fraud or in support of social engineering in more elaborate cybercriminal endeavours. Being able to write in a foreign language with perfect grammar, emulate a company’s or demographic’s tone of voice, alter your voice to impersonate another person, or even alter your live video appearance to do the same, are all obviously useful capabilities for criminal application.

With all of this is now being applied for criminal use, severe losses are being incurred as a result, for both individuals and businesses. A recent example is the attack on an employee of a Hong Kong-based business. He was initially sceptical, and reluctant to accommodate a suspicious request to conduct money transfers. He was then invited to a live video conference with what appeared to be several known company executives, all of which were in fact so-called “deepfake” altered video streams. He was thus convinced that the assignment to transfer the funds was genuine. He proceeded to transfer approximately 20 million US dollars to the criminals.

However, while it garnered much attention, this is far from the only malicious use-case for AI technologies. Just as we expect AI to be pervasively transformative for businesses and society, it will likely prove to become just as available to threat actors. Other malicious applications include efficient, automated, and scaled exploration of vulnerabilities and execution of attacks, evasion of behavioural detection, the improved and automated ‘mutation’ (polymorphism) of malware to escape detection, and attacks on machine learning systems themselves – all of which are actively being developed and tested by threat actors right now.

PHOTO: ISTOCK.COM / GORODENKOFF
PHOTO: ISTOCK.COM / GORODENKOFF

Fighting AI back – with AI

Thankfully, however, AI technologies are also being applied to the defensive side of security.

In the past, many organisations attempting to introduce so called Data Loss Prevention tools were overwhelmed by the effort required to fine tune and maintain them. So, they would either abandon their efforts or be forced to apply the tools to a relatively narrow set of easily identifiable sensitive data types.

AI is now re-invigorating this category of security capabilities,fuelled by its potential to perform meaningful analysis of highly complex sets of unstructured data.The application of AI in phishing and social engineering is also not the exclusive domain of threat actors. AI technology is also being applied in the security capabilities intended to identify such texts, and to identify AI-generated texts in general. Thus,something as mundane as email filtering is now essentially an AI-versus-AI fight.

In cyber-security monitoring and detection, when venturing beyond the detection of what is known to be malicious, anomaly detection is applied to identify events of interest, worthy of further investigation and potentially symptomatic of a security incident.

This is traditionally a challenging discipline, involving constant improvement and the fine tuning of rulesets and thresholds to determine when an event is anomalous enough to warrant further investigation by human analysts. While simple automation can help streamline certain process steps, such as data enrichment prior to exposure to human analysts, the introduction of AI into anomaly detection holds potential to significantly improve efficiency. AI can better distinguish between genuine threats and false positives, thus radically improving the “signal to-noise ratio”. Similarly, we are now seeing AI-driven products come to market which address the currently labourintensive normalisation of data from heterogenous sources, to make it suitable for detection and querying.

Transforming threat intelligence into effective detection is currently a relatively labourintensive discipline for cyber-security organisations. This entails moving from the comparatively simple generation of atomic indicators of compromise to the detection of more complex event patterns on specific technologies.

What is signal-to-noise ratio (SNR)?

SNR measures the proportion of relevant, actionable alerts (the “signal”) to irrelevant or false alerts(the “noise”) in a system. It indicates the clarity and effectiveness of a detection system in identifying real threats. A high SNR means the system is good at distinguishing real threats from non-threatening anomalies, leading to fewer false positives. This allows security teams to focus on genuine security incidents without being overwhelmed by irrelevant alerts. Conversely, a low SNR results in many false positives, making it harder to identify and address actual threats.

The coupling of multiple AI capabilities is poised to yield radical efficiency gains in this area, enabling more rapid development and widespread deployment of effective detection methods.This area of improvement extends to the generation of preventive measures, such as block rules and policies. Where false positive rates prove negligible, and when operational risk permits, we will likely see a gradually increasing level of automation when it comes to detection, fully removing the human from the loop.In addition, Security Operations Centres (SOC) and Incident Response teams stand to gain significantly from AI technologies.

Many tasks typically handled by the so-called “first line” SOC analysts can be automated or greatly aided by AI. The initial analysis and damage assessment can be supported in a manner that reduces the need for heavy expertise and thus lowering the skill level for filling such a role. This, in turn, helps address the global“cyber-security skills gap”.

It is also worth noting how AI-driven large language models (LLM)can enhance efficiency in SOC and Incident Response teams. In the short term, we will see benefits such as the interaction between tools, data and knowledge bases, as well as the support the AI-driven language models can give by writing timelines, reports,presentations and other communications.

It’s not only the criminals trying to deceive us – we’re also trying to trick the fraudsters. Deception technologies have long been applied in cyber-security. This includes the creation of fake assets,such as files, accounts, computers or entire networks, that appear attractive, valuable and/or vulnerable but are not actually part of a real, operational system. This serves several purposes, most notably the high-fidelity signals from monitoring such assets, the ability to study the attacker’s tactics, techniques, and procedures(TTPs) , and the diversion of the attacker, leading said attacker to focus on the fake assets rather than the real one. However, attackers using AI for adversarial attacks can gain sufficient information over time about defence systems and identify honeypot networks,thus learning to avoid them in subsequent attacks. The introduction of AI-based dynamic honeypot assets (a tempting target that appears legitimate), aims to counter this through automatically reconfigurable traps that adjust and handle attacks based on the attacker’s behaviour.

AI technology plays both offense and defence

It is evident that AI technology and its uses bring a multitude of new risks and security considerations. Within cyber-security, we see that AI drives yet another escalation, improving attacks and simultaneously advancing defences to keep up. Some argue that for once, the benefits to cyber-security defence actually exceed those offered to our adversaries - but we’re still in early days.

As security professionals, we naturally tend to be somewhat dystopian in our thinking. When it comes to the evolution and exploding application of AI, this mindset is certainly not helped by the warnings voiced by several prominent technologists and AI experts that AI is an existential threat to humanity. Seeing how AI can also be applied in a security context, though, leaves one with hope.It’s all about understanding the risks and applying corresponding measures. From the safety and security measures of the AI systems and applications, to how they are applied in wider business processes, to the new security capabilities required in the business as a result, to putting a human in the loop, or even curbing our appetite for efficiency and automation, this is everyone’s job and a critical leadership responsibility.

A human’s illustration of text-to-image diffusion. The illustration is part of the project ”Visualising AI” by Google DeepMind, where artists were invited to create illustrations of different aspects of AI. This illustration was created by Linus Zoll.
A human’s illustration of text-to-image diffusion. The illustration is part of the project ”Visualising AI” by Google DeepMind, where artists were invited to create illustrations of different aspects of AI. This illustration was created by Linus Zoll.