AI is the hot topic right now. Some people are looking for ways to make the most of it, trialling software in beta to get ahead of their competitors. Others, however, are terrified of what it might mean for their business, assets or livelihood.
It’s true that the emergence of AI is impacting entire industries and transforming the way people live and work. There’s no doubt that there’s massive potential for efficiency and innovation, but there are also downsides – many of which we aren’t yet aware of. This is particularly true when it comes to cyber risks.
Wherever you stand on the topic, it pays to understand AI’s implications. In this blog, we touch on three cyber risks emerging from the rise of this technology.
Malware
There are countless AI tools, and whilst they will all try to implement safeguards to prevent them from being used to devise malware, they will have varying levels of success. After all, malware versus cyber security has been an arms race for decades already, and the rise of AI is simply the latest escalation.
Cyber criminals can use AI tools to bypass the protections on the less secure offerings. With natural language processing (NLP), they can generate ever more sophisticated malware. And, as the tools become more refined and the barrier to entry falls, even those without basic programming skills will be able to create damaging malware.
Whilst this would not be expected from the more established NLP tools like ChatGPT, there will be softer targets for this purpose as a slew of AI software providers go to market.
Data manipulation and contamination
AI tools are only as good as the data they use, and that data is only as good as those who provide it. Issues can therefore arise when AI tools are fed bad data. This can happen innocently, but it can also be done with malice.
Cyber criminals may deliberately contaminate the data used to train AI tools in order to manipulate their decision-making. This is sometimes referred to as data poisoning. When training data is corrupted, AI models learn erroneous or biased information, which malicious actors can then exploit for nefarious purposes.
For example, the data used by AI tools could be manipulated to introduce well-hidden vulnerabilities. Anyone relying on the AI could then unknowingly deploy insecure, AI-generated code, opening the door to future cyber attacks.
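For readers who want a concrete feel for the mechanism, here is a deliberately simple sketch of label-flipping data poisoning on a toy classifier. Everything in it – the synthetic dataset, the scikit-learn model and the 30% flip rate – is an illustrative assumption, not a depiction of any real-world attack.

```python
# A minimal sketch of label-flipping data poisoning on a toy
# scikit-learn classifier. All numbers are illustrative; real
# poisoning attacks are far subtler and harder to detect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a synthetic binary classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: train on unmodified labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Clean accuracy:    {clean_model.score(X_test, y_test):.2f}")

# Poisoned run: an attacker flips the labels on 30% of the training data.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print(f"Poisoned accuracy: {poisoned_model.score(X_test, y_test):.2f}")
```

Real attacks corrupt a much smaller, more targeted slice of the data so the degradation goes unnoticed, but the principle is the same: corrupt what a model learns from, and you corrupt what it does.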
Social engineering
This is possibly the most frightening area for individuals. There are already countless stories of someone’s grandparent losing their life savings to a fraudster, and the risk with AI is that these attacks will become more sophisticated, personal, and persuasive.
We’re not just talking about higher-quality phishing emails, free of the obvious spelling and grammatical errors that make them easier to identify. We can also expect to see more and more deepfake attacks that use snippets of a target’s voice to trick victims into divulging sensitive information.
Whether it’s a parent receiving a voicemail from a deepfake mimicking their child or an employee getting a message from what sounds like their manager, AI gives social engineering a powerful new way to persuade people to share information with, or pay money to, someone they shouldn’t.
Keep abreast of AI risks with RiskBox
Whilst AI is definitely here to stay and will likely turbocharge certain aspects of business for the better, it is vital to be aware of the risks and not get distracted by the benefits of the shiny new thing.
To learn about the options available to protect your organisation from cyber and data risks, contact a member of our team. We’d be happy to get to know your needs before recommending ways to help ensure business continuity should the worst happen.