ChatGPT can be used to generate malicious code, finds research


OpenAI’s ChatGPT, the large language model (LLM)-based artificial intelligence (AI) text generator, can seemingly be used to generate code for malicious tasks, a research note by cyber security firm Check Point observed on Tuesday. Researchers at Check Point used ChatGPT and Codex, a fellow OpenAI natural-language-to-code generator, with standard English instructions to create code that could be used to launch spear phishing attacks.

The biggest issue with such AI code generators lies in the fact that these natural language processing (NLP) tools can lower the entry barrier for hackers with malicious intent. Since the code generators do not require users to be well versed in coding, any user can collate the logical flow of a malicious tool from information on the open web, and use that logic to generate the syntax for malicious tools.

Demonstrating the issue, Check Point showed how the AI code generator could be used to create a basic code template for a phishing email scam, then apply follow-up instructions in plain English to keep improving the code. As the researchers demonstrated, any user with malicious intent could therefore build an entire hacking campaign using these tools.

Sergey Shykevich, threat intelligence group manager at Check Point, said that tools such as ChatGPT have “potential to significantly alter the cyber threat landscape.”

“Hackers can also iterate on malicious code with ChatGPT and Codex. AI technologies represent another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities,” he added.

To be sure, while open source language models can also be used to create cyber defence tools, the lack of safeguards against their use to generate malicious tools is potentially alarming. Check Point noted that while ChatGPT does state that using its platform to create hacking tools is “against” its policy, there are no restrictions that actually prevent it from doing so.

This is hardly the first time an AI language or image-rendering service has shown potential for misuse. Lensa, an AI-based image editing and modification tool by US-based Prisma, highlighted how the lack of filtering around body image and nudity could allow privacy-violating images of an individual to be created without consent.
