ChatGPT: Not Quite Skynet – Part 4
In this final part of our four-part series on ChatGPT, we draw conclusions on some of the many talking points, aim to dispel the inevitable ‘FUD’ and look at how some of the perceived threats can be turned to our advantage as defenders.
If you were to believe many of the recently published articles, we’re approaching a watershed moment in cyber defence: a turning point of cataclysmic proportions in which all of our defensive theory and practice will be neutered to the point of irrelevance.
The reality is not nearly as dramatic. To be clear, ChatGPT is a big deal in the evolution of large language models, and its accessibility and uptake by the world at large are certainly newsworthy; but, and it’s a big but, this is an AI tool that applies machine-learning algorithms to existing data created by humans. It is not free-thinking and certainly not sentient. So, while it is a game changer in many respects, humanity is safe from the killer robots, for now.
Let’s look at three of the biggest security concerns and ways in which we can address them:
Writing malware code
Requests of this kind breach more than one of OpenAI’s usage policies, and when researching this article the author was met with policy-driven refusals to the prompts ‘write me a ransomware program’ and ‘show me code to encrypt files in use’. ‘Show me code for (ir)reversible encryption’, however, produced more helpful output, including sample code, although on review this was rudimentary at best and would not be effective without further development by a competent programmer. The consensus among security professionals is that while ChatGPT may provide the building blocks of potentially malicious code, existing skill and understanding are still required to weaponise it. Putting this into perspective, we were able to find and verify ransomware code via a Google search many times faster than through our efforts with AI. On the flip side, as defenders we can turn this to our advantage by writing detection capabilities based on the output, matching on its strings, or by asking ChatGPT to generate logic such as Snort rules to alert on the very code it had just produced, something it did with refreshing accuracy; a simple illustration of the string-matching approach follows. Conclusion – currently little more effective at empowering an attacker than a well-crafted search engine query, but it has the potential to provide a starting point for the less skilled.
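As a minimal sketch of that string-matching idea in Python: the byte patterns below are purely illustrative placeholders, not indicators for any real malware; in practice a defender would derive them from the suspect output itself (function names, ransom-note text, crypto constants).

```python
#!/usr/bin/env python3
"""Minimal string-based detection sketch.

The signatures are illustrative assumptions; real ones would be
lifted from the AI-generated sample code under analysis.
"""
import sys
from pathlib import Path

# Hypothetical byte patterns taken from AI-generated sample code.
SIGNATURES = {
    b"encrypt_files_in_place": "suspicious function name",
    b"YOUR FILES HAVE BEEN ENCRYPTED": "ransom-note string",
    b"Fernet.generate_key": "bulk file-encryption primitive",
}

def scan(path: Path) -> list[str]:
    """Return a description of every signature found in the file."""
    data = path.read_bytes()
    return [desc for sig, desc in SIGNATURES.items() if sig in data]

if __name__ == "__main__":
    for name in sys.argv[1:]:
        for desc in scan(Path(name)):
            print(f"{name}: {desc}")
```

In practice the same logic would be pushed down into purpose-built tooling such as YARA rules, or the Snort rules mentioned above for the network layer.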
Generating phishing emails
This is an area in which the technology works well and with seemingly little safeguarding. Even a blatant query such as ‘write me an email convincing the user to open an attachment’ returned an eloquent, self-justifying response, allaying the recipient’s security concerns and assuring them that the supposed content had been scanned and verified by ‘the IT department’. There is no silver bullet to protect against this; user-awareness training (UAT) is the key defence here, taking the message away from the tech and employing knowledge and manual verification – ‘is this usual, reasonable, verifiable?’ are the questions users should be asking in order to prevent phishing-related breaches, and that verification can be supported with simple tooling, as sketched below. Conclusion – yes, it’s good, a little too good, and certainly better than the efforts of criminals whose native language is not yours. The mitigations remain the same, regardless of the source!
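As a minimal sketch of that supporting tooling, assuming mail is available as .eml files and that the gateway stamps an Authentication-Results header (not every environment does), the following Python uses only the standard library to flag messages whose SPF, DKIM or DMARC checks did not pass:

```python
#!/usr/bin/env python3
"""Flag emails whose authentication results look suspect.

A triage sketch, not a phishing detector: it assumes the mail
gateway writes an Authentication-Results header into each message.
"""
import sys
from email import policy
from email.parser import BytesParser

def auth_failures(eml_path: str) -> list[str]:
    """Return any SPF/DKIM/DMARC verdicts that are not a clean pass."""
    with open(eml_path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    problems = []
    for header in msg.get_all("Authentication-Results") or []:
        value = str(header).lower()
        for mech in ("spf", "dkim", "dmarc"):
            # e.g. "spf=fail" or "dkim=none" inside the header value
            for verdict in ("fail", "softfail", "none", "temperror"):
                if f"{mech}={verdict}" in value:
                    problems.append(f"{mech} reported {verdict}")
    return problems

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for problem in auth_failures(path):
            print(f"{path}: {problem}")
```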
Impersonation
Linking directly to the issue of content generation for phishing, there have been concerns that the platform could be abused to convincingly impersonate specific individuals, leading to bogus requests being actioned or to reputational and brand damage. As we’ve previously established, ChatGPT applies ML/AI algorithms to existing data, meaning that such data must be publicly accessible and indexable for this to happen. If you’re a Fortune 500 company with a C-suite that frequently courts the media, this may be a concern. In our testing we came away with some acceptable renditions of notable individuals; for anyone without an indexed public profile, however, it drew a blank.
Conclusion – deepfakes should be more of a concern.
As part of the research for this article, we came across Shell Genie, a project on GitHub that leverages ChatGPT to issue plain-English commands to an operating system. Here is a novel attack path: being able to say to the OS “find all (interesting) documents on the system, zip them, move them to x folder and upload with scp to (insert C2 address here). Find all files in /var/log newer than 24hrs and delete them.” Perhaps not a huge advantage to an experienced adversary, but it will certainly lower the bar for the less experienced or those raised on a GUI-driven OS; a sketch of the underlying mechanic follows. We will revisit this in a later article.
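To see why the bar drops, consider how little glue code such a tool needs. The sketch below is our own illustration rather than Shell Genie’s actual implementation; it assumes the openai Python package (with the chat-completion interface as it stood at the time of writing), an OPENAI_API_KEY environment variable, and a model name that may need updating, and it demands human confirmation before anything runs.

```python
#!/usr/bin/env python3
"""Translate a plain-English request into a shell command.

An illustration of the Shell Genie idea, not its actual code.
Assumes `pip install openai` and OPENAI_API_KEY in the environment;
the model name is an assumption.
"""
import subprocess
import openai  # chat-completion interface as at the time of writing

PROMPT = ("You translate user requests into a single bash command. "
          "Reply with the command only, no explanation.")

def suggest_command(request: str) -> str:
    """Ask the model for one shell command matching the request."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": request}],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    command = suggest_command(input("What do you want to do? "))
    print(f"Suggested: {command}")
    # Never run model output unreviewed: require explicit confirmation.
    if input("Run it? [y/N] ").lower() == "y":
        subprocess.run(command, shell=True)
```

For defenders, that confirmation step is exactly where logging and policy enforcement belong.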
So far we’ve focused heavily on the perils of ChatGPT. Thinking specifically about the positives it can bring to a cyber-security programme, I asked the platform to sell itself to me in this context:
“ChatGPT, a large language model trained by OpenAI, has many potential applications in the field of cyber defence. With its natural language processing (NLP) capabilities, ChatGPT can be used to help organizations detect and respond to cyber threats more quickly and effectively.
One way ChatGPT can be used for cyber defence is through the development of chatbots that can interact with users to identify potential security issues. Chatbots can be used to monitor networks and systems for signs of suspicious activity, such as unusual login attempts or file transfers. They can also be programmed to ask users questions to verify their identity or to confirm that they are not engaging in risky behaviour, such as clicking on suspicious links or downloading unknown files.
Another potential use case for ChatGPT in cyber defence is in the development of predictive analytics models. By analysing large volumes of data, including network logs and security alerts, ChatGPT can help identify patterns and anomalies that may indicate a potential cyber attack. This can help organizations proactively identify and respond to threats before they cause damage.
In addition to detecting threats, ChatGPT can also be used to provide real-time assistance to security analysts during an incident. By using NLP to understand and analyse large volumes of security data, ChatGPT can help analysts quickly identify the root cause of an incident and recommend appropriate actions for containment and remediation.
Finally, ChatGPT can be used to enhance cybersecurity training and awareness programs. By generating realistic phishing emails or other social engineering attacks, ChatGPT can help employees learn how to recognize and avoid common threats. It can also be used to simulate incident response scenarios, allowing security teams to practice their response and recovery processes in a safe, controlled environment.”
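Taking the predictive-analytics suggestion at face value, the underlying idea (spotting accounts whose behaviour deviates sharply from a baseline) can be sketched without any LLM at all. The log format and the three-sigma threshold below are illustrative assumptions:

```python
#!/usr/bin/env python3
"""Flag accounts with anomalous failed-login counts.

A toy baseline model illustrating the pattern-spotting idea in the
quote above; the CSV layout (timestamp,user,result) and the
three-sigma threshold are illustrative assumptions.
"""
import csv
import statistics
from collections import Counter

def anomalous_users(log_path: str, sigmas: float = 3.0) -> list[str]:
    """Return users whose failure count sits far above the mean."""
    failures = Counter()
    with open(log_path, newline="") as fh:
        for _timestamp, user, result in csv.reader(fh):
            if result == "fail":
                failures[user] += 1
    counts = list(failures.values())
    if len(counts) < 2:
        return []
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    # Flag any account more than `sigmas` deviations above the mean.
    return [user for user, n in failures.items()
            if stdev and n > mean + sigmas * stdev]

if __name__ == "__main__":
    for user in anomalous_users("auth_log.csv"):
        print(f"anomalous failed-login volume: {user}")
```

Where a model like ChatGPT would earn its keep is the step after this one: summarising and explaining the flagged anomalies to a human analyst.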
All interesting food for thought, and when you throw into the mix the ability to iterate through billions of code variations to identify and guard against potential misuse (but also to create it), it’s clear that artificial intelligence and machine learning are here to stay. Sam Altman (OpenAI’s CEO) was quoted in the New York Times as saying that AI’s “benefits for humankind could be ‘so unbelievably good that it’s hard for me to even imagine.’” He has also been quoted as saying that, in a worst-case scenario, AI could kill us all.
Duncan Wright, Threat Intelligence Consultant (and ChatGPT)