Q. AI is not all that new. By the broadest definition, there has been some form of it operating since 1951, so why is it dominating headlines now?
Current references to AI largely mean generative AI (Gen AI) and large language models (LLMs). These differ significantly from previous forms of AI in how they interact with us: Gen AI and LLMs are designed to learn patterns and generate content autonomously, and they exhibit a remarkable capacity for creative tasks, unlike earlier forms that were often rule-based or task-specific. Their ability to produce human-like content has catapulted AI to the forefront of public discourse, shaping discussions about its potential and its impact on society.
Q. What’s going to happen? Should we be scared?
This is not our first technological revolution. I have observed that humans typically move through four stages of behaviour as our relationship with a given technology matures.
Stage 1: ‘Disruption and Hype’
I would say this is where we currently sit with regard to AI. Typically, what we see here is a great deal of enthusiasm, anticipation and fear surrounding the technology. We do not fully understand it, yet the promise of what it can deliver is extremely seductive. There is a surge of interest in and media coverage of the technology, and a mix of fascination and apprehension as people grapple with the unknown, contemplating the potential impact on jobs, privacy and societal structures.
Stage 2: ‘Fear and Backlash’
The second stage is unavoidable because humans have a cognitive bias toward negativity: we are wired to place more weight on negative information than on positive information. In the case of Gen AI and LLMs, however, we will ultimately find the technology too powerful not to use. LLMs can help us interrogate data quickly and efficiently, and there will always be a force pushing us toward progress.
Stage 3: ‘Adaptation and Normalisation’
This is where the technology is no longer seen as disruptive or novel, and its use becomes a normal, ordinary part of everyday life. Much of the reasonable concern about AI is its potential to obscure where human input ends and where AI begins. Today, users have very little understanding of how these LLMs are trained and what safeguards have been coded into the models. If you ask GPT-4 today, it will tell you it has a layer of protection that reduces the likelihood of generating “harmful” responses. That answer is quite vague and is unlikely to withstand even minimal scrutiny. By the time we reach this stage, however, we will have acceptable answers to the questions raised.
Stage 4: ‘Transformation and Unforeseen Consequences’
Here we see the profound impact the technology has made on society, culture and individual lives, as well as the effects, both positive and negative, that were not initially anticipated. Take the internet as a recent example: today it plays an irreplaceable role in almost every area of life, yet we are now dealing with the unforeseen consequences of how internet exposure affects the social development of children.
The advancements we have seen in AI in recent years are unprecedented. AI is both an amplifier and an accelerator: it lets us do everything we currently do, just faster and at much greater volume. The Data, Information, Knowledge, Wisdom (DIKW) model, which places data at the bottom of the hierarchy and wisdom at the top, is a useful way to visualise what AI does. AI can collect data and turn it into information, and the responses LLMs provide can simulate knowledge by drawing on the patterns in their training data. Wisdom, though, looks like a significant barrier for AI. I think we can train it to consider multiple perspectives, but we will also need to protect AI against bias, and those protections are likely to limit how quickly AI evolves.
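To make that mapping concrete, here is a minimal sketch in plain Python, with all numbers invented for illustration: raw readings stand in for data, a per-day summary for information, and a simple pattern extrapolated from that summary for the kind of knowledge an LLM simulates.

```python
from statistics import mean

# Data: raw, uninterpreted observations (hypothetical hourly temperature readings, one list per day).
readings = [
    [14.1, 15.0, 16.2, 15.5],  # day 1
    [15.2, 16.1, 17.0, 16.4],  # day 2
    [16.0, 17.2, 18.1, 17.3],  # day 3
]

# Information: the same data given structure and context (a daily average).
daily_means = [mean(day) for day in readings]

# Simulated knowledge: a pattern learned from the information itself,
# here the average day-to-day change, extrapolated to forecast tomorrow.
trend = mean(b - a for a, b in zip(daily_means, daily_means[1:]))
forecast = daily_means[-1] + trend

print(f"daily means: {[round(m, 2) for m in daily_means]}")
print(f"forecast for the next day: {forecast:.1f}")
```

Notice that nothing in the sketch corresponds to wisdom: knowing whether the forecast should be acted on, and at what cost, sits outside the model.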
Q. So, what does the advancement of AI mean for cybersecurity?
In cybersecurity, it has long been said that attackers have the advantage over defenders: defenders must protect against every move the adversary might make, whilst the adversary needs to find only one successful exploit.
I expect AI will significantly widen this gap in the short term, with cybercriminals leveraging the technology to develop more sophisticated attacks. For example, attackers can now create increasingly convincing fake audio, video and images, which will be used in more sophisticated, large-scale phishing campaigns.
On the flip side, AI is expected to accelerate threat-detection capability enormously, facilitating the analysis of vast quantities of data and behavioural activity in real time to detect potential cyber-attacks with unprecedented speed and accuracy. It will also enable threat intelligence to be collated automatically, helping organisations stay informed about emerging threats and vulnerabilities more readily.
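As a rough illustration of what that kind of analysis can look like, the sketch below applies a standard off-the-shelf anomaly-detection algorithm (scikit-learn's IsolationForest) to synthetic per-session behavioural features. The features, numbers and contamination rate are invented for the example and are not drawn from any particular product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic behavioural features per session: [logins per hour, MB transferred, failed auth attempts].
normal_sessions = rng.normal(loc=[5.0, 50.0, 1.0], scale=[2.0, 15.0, 1.0], size=(1000, 3))

# A handful of sessions simulating attack-like behaviour (rapid logins, bulk transfer, many failures).
attack_sessions = rng.normal(loc=[40.0, 900.0, 25.0], scale=[5.0, 100.0, 5.0], size=(10, 3))

sessions = np.vstack([normal_sessions, attack_sessions])

# An isolation forest isolates outliers via short random partition paths;
# `contamination` is the assumed fraction of anomalous sessions.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(sessions)  # +1 = normal, -1 = anomalous

flagged = np.where(labels == -1)[0]
print(f"flagged {len(flagged)} of {len(sessions)} sessions, indices: {flagged}")
```

Real deployments stream events through far richer feature pipelines and combine many detectors, but the core idea is the same: learn a behavioural baseline, then flag deviations from it.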
AI will also lead to the development of more advanced authentication methods, reducing the risk of criminals gaining unauthorised access to digital systems and applications and safeguarding critical data and sensitive information against potential security breaches.
Perhaps best regarded as a double-edged sword, AI is set to bring both advantages and challenges to cybersecurity practice. This uncertainty demands a cautious and proactive approach if we are to stay ahead of emerging threats.