As we have seen several times in recent months, the big tech giants strongly believe in the potential of generative artificial intelligence, and consider it a very useful tool to allow humans to simplify and speed up some parts of their workflows. But some of these humans have far from noble ends, and the security countermeasures put in place by AI providers such as OpenAI and Google do not always hold, and therefore software such as ChatGPT and the like end up becoming invaluable tools to carry out illegal activities of various kinds. One of these areas is hacking.
Maybe some of you have heard the story where someone asked ChatGPT for the best torrent sites to pirate movies and ChatGPT replied that it is an illegal activity and therefore could not help; and then the user said he didn’t know, and asked what these sites were so as to avoid them; and ChatGPT has provided a nice complete list of the most popular pirate sites. It makes you smile, but it explains very clearly how easy it is, with the right idea, to “rip off” these systems, especially now that they are still very young and, if you pass the term, naïve.
Another technique that is spreading among hackers is the so-called Prompt Injection, and is based on the fact that chatbots do not always know how to differentiate between instructions and content. In practice, a hacker asks the bot to summarize a specially created web page with a series of commands and instructions that would normally be ignored: the Wall Street Journal tells of the experiment of the researcher Johann Rehberger, who simply titling a page “IMPORTANT INSTRUCTIONS” convinced ChatGPT to access his emails, summarize them and publish them online.
Another much-feared scenario is that of so-called Data Poisoning, in which hackers negatively affect the pool of data that AI has access to to make it deviate significantly from its original purpose. It doesn’t have to be a high-profile hacker operation: do you remember a few years ago when a group of users agreed to turn Tay, Microsoft’s experimental chatbot, into a fierce supporter of the Nazi doctrine? It took just 24 hours from its launch.
In general, it can be said that artificial intelligence promises to make hacking much easier and more accessible: it is no longer necessary to be expert programmers or researchers – just know what to ask and the technical details are worried about artificial intelligence. Of course, large companies are investing heavily to make their platforms more secure – in the case described above, for example, OpenAI has implemented a fix to the flaw very quickly (however, the attack would have been ineffective on most accounts, since the ability to access third-party apps and communication services such as Slack and Gmail is still in Beta, and you could argue that that’s precisely why Beta tests are done!).
Today, in Las Vegas, the 2023 edition of the DEF CON hacking conference kicks off; Major AI service providers, such as OpenAI, Google and Anthropic, will make their products available to hackers to find security holes – with rewards, as is standard practice for the industry, getting richer as the severity of the vulnerability increases.