Despite the splash made by ChatGPT at the end of 2022, Artificial Intelligence and Machine Learning have been part of our daily lives for some time. We use smart home devices, chatbots, voice assistants, and Netflix recommendations with little thought as to what’s behind the scenes.
Whether you are excited by the hype or concerned by the scaremongering headlines reporting that this advanced technology will take our jobs, the AI train is rapidly gathering speed – ready or not. Businesses face increasing pressure to get on board or risk falling behind the competition, leading some to rush ahead without considering the AI cybersecurity risks.
As with any substantial technological advance – take the internet, for example – AI brings a vast array of operational benefits for businesses. However, it also introduces significant additional risks, as cybersecurity threats step up once more, evolving as fast as the new technology itself.
Now a leader in AI-based endpoint cybersecurity solutions, BlackBerry recently published its 2023 survey of 1,500 IT decision-makers across North America, the UK, and Australia. Although respondents in all three countries believed that, overall, ChatGPT would be used for ‘good’, 74% voiced concern about the potential AI cybersecurity threats.
Is AI good or bad for society?
Generative AI tools use statistical techniques that enable machines to learn over time. Drawing on historical activity to build behaviour profiles of users, assets and networks, AI recognises patterns, makes assumptions and performs autonomous processes without human intervention. The more data it analyses, the cleverer it becomes.
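To make that concrete, here is a minimal sketch of behaviour profiling – the login data and the three-sigma rule are illustrative assumptions, not any particular vendor’s method:

```python
# A minimal behaviour-profiling sketch: learn each user's 'normal'
# login hours from historical activity, then flag logins that fall
# well outside that profile. All data here is made up for illustration.
from statistics import mean, stdev

# Hypothetical historical activity: hour-of-day of past logins per user
history = {
    "alice": [8, 9, 9, 8, 10, 9, 8, 9],         # a nine-to-five pattern
    "bob":   [22, 23, 23, 21, 22, 23, 22, 23],  # a night-shift pattern
}

def is_anomalous(user: str, login_hour: int, sigmas: float = 3.0) -> bool:
    """Flag a login deviating more than `sigmas` standard deviations
    from the user's learned profile."""
    profile = history[user]
    mu, sd = mean(profile), stdev(profile)
    return abs(login_hour - mu) > sigmas * max(sd, 0.5)  # floor the spread

print(is_anomalous("alice", 9))   # False – matches her usual hours
print(is_anomalous("alice", 3))   # True  – a 3 a.m. login is out of profile
```

Production systems apply the same principle across thousands of signals per user, asset and network, refining the profiles as new data arrives.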
AI has the ability to help solve, at speed, some of the more complex challenges we face. High on society’s agenda is our need to protect company data. So whilst AI increases the attack surface and presents future data governance challenges, does it also provide significant opportunities to fight fire with fire?
Is AI a threat to our cybersecurity?
Increased data breach risk.
Employees ‘playing’ with this exciting new technology will have been a familiar scene in offices up and down the country. It didn’t take us long to realise that the more information we enter into ChatGPT, Google Bard and Jasper AI, the better the results the chatbots return.
Two recent reports present worrying findings about the information your team could be sharing with these applications – increasing the likelihood of an accidental insider attack.
CybSafe’s study found that 64% of US office workers had entered work information into a generative AI tool; a further 28% weren’t sure if they had.
Meanwhile, data security experts Cyberhaven found that 11% of all data employees pasted into ChatGPT was confidential corporate information, intellectual property or customer PII, and 4% of respondents openly admitted to entering sensitive data. *
Similarly, we often don’t think twice about sharing our data with a website chatbot, automatically trusting that our information will be handled responsibly and securely.
Malicious code.
A common concern is that large language models (LLMs), like ChatGPT, will upskill less experienced threat actors, improving and scaling their criminal activity. Negative reports focus on AI’s ability to write code. Whilst that code is not yet perfect – and, of course, it must be correct to work – it is improving, and in the future coding may require little or no human intervention.
Because AI applications are built on highly complex algorithms, identifying their security flaws is harder. AI itself is not yet advanced enough to fully understand the complexities of software development, leaving the code it produces vulnerable to attack. Hackers can escalate privileges, mark malicious code as safe, and develop mutating malware that alters its own code upon execution to bypass endpoint detection systems.
51% of IT decision-makers believe there will be a successful cyber-attack credited to ChatGPT within the year. ^
Phishing.
Phishing remains the most common cyber-attack, with nearly 1 billion emails exposed in a single year – a figure that’s forecast to rise. +
Threat actors are known to be misusing ChatGPT to improve the sophistication of their social engineering, creating one of the biggest generative AI security risks.
Information is gathered at scale from social media, corporate websites and other public sources to create detailed, highly convincing phishing profiles and very believable malicious websites that are even harder to spot. Furthermore, AI-generated content removes the tell-tale language errors, and because people struggle to tell the difference between human and AI-created text, these lures build trust far more easily.
Deepfake AI.
Deepfake software can create highly realistic image, audio and video hoaxes. The technology has developed substantially; there’s a reason Hollywood has gone on strike. However, this extends far beyond Tom Cruise not actually appearing in the next Mission Impossible – deepfakes can be used to bypass stringent security protocols in the global financial industry.
Fears are growing over automatic speaker verification security systems. In early 2023, a journalist used AI to crack his own Lloyds Bank account. ** Around the same time, a Guardian Australia reporter used just four minutes of audio to clone their own voice and access the voiceprint system used by Australia’s tax and welfare services.
Vishing social engineering scams can also clone a trusted authority’s voice to coax victims into sharing financial information and passwords.
How is AI used in cybersecurity?
According to BlackBerry’s research, 82% of IT decision-makers plan to invest in AI-based cybersecurity in the next two years – almost half plan to do so in 2023.
This is driven by the vast quantity of enterprise devices generating data that needs to be scrutinised to mitigate cyber-attacks – a volume that will soon be impossible for humans to analyse manually.
By 2025, 41.6 billion connected devices are forecast to generate 79.4 zettabytes of data.
– International Data Corporation
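To put that forecast in context, a quick back-of-the-envelope calculation using only the figures quoted above suggests roughly how much data each device would account for:

```python
# Rough arithmetic on the IDC forecast quoted above – illustrative only.
ZETTABYTE = 10 ** 21                # bytes in one zettabyte
total_bytes = 79.4 * ZETTABYTE      # forecast data volume
devices = 41.6e9                    # forecast connected devices

per_device = total_bytes / devices
print(f"≈ {per_device / 1e12:.1f} TB per device")   # ≈ 1.9 TB each
```

Nearly two terabytes of data per device is well beyond anything a human team could triage by hand.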
As the technology progresses and the hackers up their game, it will become increasingly difficult to defend against AI cybercrime without employing AI itself to combat it.
Current cybersecurity systems are losing effectiveness against advancing AI security threats. A self-learning, responsive, automated framework will help resolve some of the biggest challenges in data governance, and recent advances in computational power and scalability mean vast quantities of data can now be analysed.
Threat detection.
Fraudulent activity can be detected by identifying pattern irregularities with greater accuracy and efficiency. As well as negating the potential for human error or bias, the pattern recognition of machine learning algorithms alerts CISOs to emerging attack trends so preventative action can be taken, effectively shrinking the attack surface.
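As a minimal illustration of this kind of pattern-based detection, the sketch below trains an anomaly detector on ‘normal’ activity and flags irregular events – it assumes scikit-learn is available and uses made-up transaction features:

```python
# A threat-detection sketch using an Isolation Forest anomaly detector.
# The transaction data (amount in GBP, hour of day) is invented.
from sklearn.ensemble import IsolationForest

normal_activity = [[25, 9], [40, 12], [18, 14], [60, 11], [35, 10],
                   [22, 13], [48, 15], [30, 9], [55, 16], [27, 12]]

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_activity)            # learn what 'normal' looks like

# predict() returns 1 for normal events and -1 for irregular ones
print(model.predict([[32, 11]]))      # [ 1] – in line with the pattern
print(model.predict([[9500, 3]]))     # [-1] – a large 3 a.m. transfer, flagged
```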
Incident response.
All cybersecurity demands continuous monitoring to prevent intrusion and detect potential data governance issues, but one of the leading benefits of AI cybersecurity is its real-time, automated, proactive response to them. Should an incident occur, a rapid response mitigates the impact.
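A simplified sketch of that automated response loop might look like the following – `isolate_host` and `notify_soc` are hypothetical stand-ins for whatever hooks a real security platform exposes:

```python
# An automated incident-response sketch: contain high-risk events
# immediately, and surface every event to the security team.
import logging

logging.basicConfig(level=logging.INFO)

def isolate_host(host: str) -> None:
    logging.info("Quarantining %s from the network", host)  # placeholder action

def notify_soc(event: dict) -> None:
    logging.info("Alerting the security team: %s", event)   # placeholder alert

def respond(event: dict, risk_threshold: float = 0.8) -> None:
    """Contain high-risk events automatically; humans stay in the loop."""
    if event["risk_score"] >= risk_threshold:
        isolate_host(event["host"])    # real-time, automated containment
    notify_soc(event)                  # every event still reaches an analyst

respond({"host": "laptop-042", "risk_score": 0.93, "type": "ransomware"})
```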
Importantly, AI cybersecurity systems are themselves vulnerable to cyber-attack, including attempts to manipulate the protective algorithms and the data used to train them.
Improving control effectiveness.
Because they provide data-driven analysis, AI cybersecurity systems inform decisions and help security teams achieve buy-in or report to stakeholders, end-users and board members. They can even monitor their own effectiveness. By identifying which areas of the vulnerability management strategy should be prioritised, and where a breach is most likely to occur, enterprises can maximise cybersecurity budgets and resources.
AI presents both cybersecurity opportunities and challenges, helping to both fight and facilitate cybercrime in a seemingly never-ending battle between cybersecurity professionals and cybercriminals.
As AI technology continues to evolve at pace, so too will the threat landscape, with cybercriminals leveraging AI for highly sophisticated attacks. As we use generative AI technologies to maximise time efficiencies, boost performance and enhance business processes, it is vital that all employees remain vigilant to prevent an attack or breach.
As the scale of our data soars, humans alone will no longer be capable of protecting such a dynamic attack surface. AI’s rapid data analysis, trend and threat detection, and automatic response deployment will strengthen our organisations’ defences against attack. Artificial Intelligence and machine learning look set to be game-changers for cybersecurity professionals and businesses alike as we embrace and implement AI security.
* Zapier, ^ BlackBerry, + AAG-IT, ** Schneier.
Your trusted data governance partner, tier1’s circular, environmentally friendly ITAD services include supportive lifecycle management, upgrade and resale solutions that help you maximise lifetime value and prevent data security issues.
As a Blancco Gold partner, our sustainable data erasure services guarantee data destruction and are approved by the UK’s National Cyber Security Centre. We are also accredited under the Government-backed Cyber Essentials Plus scheme.
To find out more about our lifecycle management solutions or our secure data wiping services, contact our friendly team on 0161 777 1000 (Manchester), 01621 484380 (Maldon) or visit www.tier1.com
Resources.
The Guardian, Forbes, Zapier, BlackBerry, Balbix, Schneier, Dell Technologies, CybSafe, KM Tech, TechTarget, MetaCompliance, Cybernetic, AAG-IT.