
Security expert warns of ‘unknown vulnerabilities’ exposed by ChatGPT 

A Chartered Security Professional (CSyP) is warning of the dangers of ChatGPT, and of the not-yet-considered vulnerabilities it might expose in an organisation’s security.  

He also warns against the dangers of impersonation, and how potentially sensitive data could find itself in the wrong hands and compromise personal and organisational security. 

Brendan McGarrity, Director of Evolution Risk & Design and a Fellow of The Security Institute (FSyI), says that the impact of ChatGPT on the security industry has not been thought through, and that it may make organisations less rather than more secure. 

He explains: “ChatGPT scrapes information from billions of questions and answers from the internet and ranks what words will come next in a sentence based on a probability to achieve a ‘reasonable continuation’ of whatever text it has got thus far.

“As one scientist put it, it keeps asking the internet over and over again ‘given the text so far, what should the next word be’. It might pick the highest-ranked word; but it may also pick a more random word which adds a layer of creativity.”  
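The process McGarrity describes, ranking candidate next words by probability and occasionally picking a lower-ranked one for variety, can be loosely illustrated in code. This is a simplified sketch, not ChatGPT’s actual implementation: the word scores are invented, and the “temperature” parameter is a common way such systems trade off the highest-ranked word against more random choices.

```python
import math
import random

def sample_next_word(scores, temperature=0.8):
    """Pick the next word from a dict of raw scores.

    A low temperature almost always yields the highest-ranked word;
    a higher temperature makes lower-ranked, more 'creative' picks likelier.
    """
    words = list(scores)
    # Softmax with temperature: convert raw scores into probabilities.
    weights = [math.exp(scores[w] / temperature) for w in words]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw one word according to those probabilities.
    return random.choices(words, weights=probs, k=1)[0]

# Hypothetical scores for words that might follow "The building is":
candidate_scores = {"secure": 2.5, "open": 1.0, "vulnerable": 0.7}

random.seed(0)
print(sample_next_word(candidate_scores, temperature=0.2))  # almost always "secure"
print(sample_next_word(candidate_scores, temperature=2.0))  # noticeably more random
```

At a low temperature the model behaves like a lookup of the most probable continuation; raising it produces the “layer of creativity” the quote refers to.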

But, McGarrity asks, does that scraping of the internet expose organisations to potential harm, and does it surface issues that have not yet been uncovered?  

He continues: “Can it find and highlight weaknesses in a client’s security profile? What checks and balances are there to protect what has previously been written, and prevent it from being presented as new? How do you lock your inner workings down? Is it possible that one party might be able to accurately impersonate another, based on the language they use? Could it be used, for example, to impersonate me?” 

Of course, McGarrity accepts that not using ChatGPT, or failing to embrace the AI revolution, means running the risk of falling behind the innovation curve. But he argues that what has already been written and is searchable on the internet, and what might be written and made available in the future, could expose vulnerabilities that have not yet been considered.  

McGarrity concludes: “It could uncover sensitive data and compromise personal and organisational security. ChatGPT is a potentially dangerous invention, and organisations need protecting from it.” 


About Sarah O’Beirne
