ChatGPT has become the internet’s favourite new toy since its release in November. The AI-powered natural language tool quickly attracted more than a million users, who have used the chatbot to create everything from academic essays and hip-hop lyrics to computer code and wedding speeches.

Not only have ChatGPT’s human-like abilities taken the internet by storm, they have also put a variety of industries on edge: New York City schools banned ChatGPT over concerns that it could be used to cheat; copywriters are already being replaced; and, according to reports, Google is so concerned about ChatGPT’s capabilities that it has issued a “code red” to protect the company’s search business.

The cybersecurity sector, a field that has long been wary of the possible consequences of modern AI, is also paying attention, amid worries that ChatGPT could be abused by hackers with limited resources and no technical expertise.

Only a few weeks after ChatGPT’s release, Israeli cybersecurity firm Check Point demonstrated how the web-based chatbot, used in combination with OpenAI’s code-writing system Codex, could produce a phishing email capable of delivering a malicious payload.

The chatbot’s apparent ability to help cybercriminals write malicious code has also prompted a recent warning from Check Point. The researchers say they have seen at least three instances of novice hackers bragging about how they had used ChatGPT’s AI capabilities for nefarious ends.

On a dark web forum, one hacker shared code written with ChatGPT that, they claimed, stole files of interest, compressed them, and transmitted them across the internet. Another user posted a Python script that they claimed was the first script they had ever written. Although the malware appeared to be harmless, Check Point said it could “simply be updated to encrypt someone’s machine fully without any user intervention.” The same forum member, Check Point noted, had previously offered access to stolen data and servers from compromised businesses.

“Abusers can create code using ChatGPT to mimic adversaries or even automate processes to make work easier. Personalized learning, composing newspaper articles, and writing computer code are just a few of the outstanding jobs it has already been utilised for,” said Laura Kankaala, head of threat intelligence at F-Secure. “It should be highlighted, however, that it can be risky to entirely trust the text and code produced by ChatGPT: the code it produces might have security flaws or vulnerabilities, and the text it generates can contain blatant factual inaccuracies,” Kankaala added, casting doubt on the reliability of code written by ChatGPT.
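To illustrate Kankaala’s point, consider a minimal, hypothetical sketch of our own (not code drawn from Check Point’s or F-Secure’s research) of the kind a code generator might plausibly produce: it works, but it builds an SQL query by interpolating user input, a classic injection vulnerability, alongside the parameterized version that avoids the flaw.

```python
import sqlite3

def find_user_insecure(db_path: str, username: str):
    # Hypothetical generated code: functional, but vulnerable.
    # Interpolating user input into SQL allows injection, e.g.
    # username = "x' OR '1'='1" would match every row in the table.
    conn = sqlite3.connect(db_path)
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(db_path: str, username: str):
    # The safe version: a parameterized query lets the driver
    # escape the input, so the payload above matches nothing.
    conn = sqlite3.connect(db_path)
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The two functions return identical results for ordinary input, which is exactly why a novice relying on generated code might never notice the difference until the insecure version is exploited.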

As the technology develops, ChatGPT “may soon be able to analyse potential attacks on the fly and provide positive suggestions to boost security,” according to ESET’s Jake Moore.

Security experts are divided over the role ChatGPT will play in the future of cybersecurity. We were also curious to hear what ChatGPT itself had to say, so we put the question to the chatbot.
