Researchers have an ‘AI chatbot’ warning for you

At a time when companies across industries are integrating AI chatbots into their services, researchers have warned people to avoid using chatbots that do not appear on a company's own website or app.
According to the Norton Consumer Cyber Safety Pulse report, cybercriminals are now capable of creating deepfake chatbots, opening up another avenue for threat actors to target less tech-savvy people. Researchers warn that users should not share any personal information with a chatbot while chatting online.
“I’m excited about large language models like ChatGPT, however, I’m also wary of how cybercriminals can abuse it. We know cybercriminals adapt quickly to the latest technology, and we’re seeing that ChatGPT can be used to quickly and easily create convincing threats,” said Kevin Roundy, senior technical director of Norton.

Hackers impersonate legitimate chatbots
The report said that chatbots created by hackers can impersonate humans or legitimate sources, such as a bank or a government entity, and then manipulate victims into handing over personal information that can be used to steal money or commit fraud.
Researchers also advised people not to click on any links received through unsolicited phone calls, emails or messages.
Hackers using ChatGPT to generate threats
Norton also highlighted that cybercriminals are using ChatGPT to generate threats "through its impressive ability to generate human-like text that adapts to different languages and audiences."

“Cybercriminals can now quickly and easily craft email or social media phishing lures that are even more convincing, making it more difficult to tell what’s legitimate and what’s a threat,” Norton added.
Earlier this year, research conducted by BlackBerry found that AI chatbots could be used against organisations in AI-powered cyberattacks within the next 12 to 24 months.
"Some think that could happen in the next few months. And more than three-fourths of respondents (78%) predict a ChatGPT-credited attack will certainly occur within two years. In addition, a vast majority (71%) believe nation-states may already be leveraging ChatGPT for malicious purposes," the report found.
