ChatGPT – how much of a threat is it?

We are nearly a year on from ChatGPT’s launch and since then we have heard plenty about AI – whether evangelising over its potential benefits or warning us against its perils.

We are guessing you have all heard of ChatGPT by now but, if not, here is an excellent introduction.  One of the interesting points in that article is that developers testing ChatGPT have found it can help them find and fix coding bugs.
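
As an illustration of the kind of thing those developers describe, the short sketch below shows a classic off-by-one bug – a hypothetical example of ours, not one taken from the article – of the sort an AI assistant such as ChatGPT will typically spot and suggest a fix for when asked to review the code.

```python
# Hypothetical example: the kind of small bug an AI code review tends to catch.

def sum_first_n(values, n):
    """Intended to sum the first n items, but the buggy version misses the last one."""
    total = 0
    for i in range(n - 1):  # Bug: should be range(n); the nth item is skipped
        total += values[i]
    return total


def sum_first_n_fixed(values, n):
    """Corrected version, as a reviewer (human or AI) would suggest."""
    return sum(values[:n])


if __name__ == "__main__":
    data = [10, 20, 30, 40]
    print(sum_first_n(data, 3))        # 30 - wrong, the third item is skipped
    print(sum_first_n_fixed(data, 3))  # 60 - the expected result
```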

Does this mean that ChatGPT raises the stakes for cyber security and the threat to business?

Although the developers found it helpful for fixing bugs, does this mean that it will create hacking code for would-be cyber criminals?

ChatGPT’s undoubted strength is its ability to generate copy, and it will certainly pose a few challenges for academics wanting to prevent plagiarism.  It will also make it easier for cyber criminals to craft more persuasive phishing emails (especially as many currently originate from non-English-speaking countries).  Employee training will remain an essential layer of protection.

How much of a Threat?

A WatchGuard podcast at the beginning of 2023 looked at the AI-generated hacking threat and discussed cases where ChatGPT had already been used; hosts Marc and Corey expected the growth in malicious use to be exponential.  They noted that back in 2020 much of it was laughable, but in just three years it has improved a great deal.  It will, however, be met by AI on the side of the good actors.  AI may replace human analysts in the future, but for now it hasn’t passed the Turing test.

So, as we approach the end of 2023, has anything changed?

Ian Simons, our Director, thinks that ChatGPT and similar AI tools have brought an increased cyber threat with them.  Currently this comes mostly from misuse of the ChatGPT brand and from the crafting of better-quality spam emails.

It can also provide the building blocks for malicious code, and even write malware text, but according to a Gen Digital Inc report this is still quite complex at the moment, and the code generated is often likely to contain bugs.  The threat also goes beyond ChatGPT, with AI-generated deepfakes, fake videos and audio improving all the time.  The report details efforts like WormGPT, which have fewer ethical restraints and are more likely to prove helpful to would-be cybercriminals.

On the plus side, if you have an elderly relative who struggles with technology, they could ask a Microsoft (who have invested $10 billion into AI) gadget how to do something in simple steps.  It could also help those at work, with the potential transformation of Microsoft Office.