As millions embrace AI-powered chatbots like ChatGPT, cybersecurity risks associated with generative AI models have become a pressing concern for individuals and businesses alike.
While these generative AI models are designed to facilitate communication and provide helpful responses, experts warn that they pose serious risks of hacking and data breaches that could compromise personal information.
A recent report by Palo Alto Networks Unit 42 showed that ChatGPT-related scams are surging. Even though OpenAI, the creator of ChatGPT, offers users a free version of the service, scammers lead victims to fraudulent websites and claim they must pay for access.
“They might collect and steal the input you provide. In other words, providing anything sensitive or confidential could put you in danger. The chatbot’s responses could also be manipulated to give you incorrect answers or misleading information,” said researchers from Palo Alto Networks Unit 42.
The report observed a 910 per cent increase in monthly registrations of ChatGPT-related domains between November 2022 and April 2023.
AI has long been a part of the cybersecurity industry. However, generative AI tools like ChatGPT are reshaping the field's future in profound ways.
Neelesh Kripalani, CTO of IT services and consulting company Clover Infotech, said: “ChatGPT can impact the cybersecurity landscape through the development of more sophisticated social engineering or phishing attacks. Such attacks are used to trick individuals into divulging sensitive information or taking actions that can compromise their security”.
With the ability to generate convincing and natural-sounding language, “AI language models like ChatGPT could potentially be used to create more convincing and effective social engineering and phishing attacks,” he warned.
OpenAI admitted in March that some users’ payment information may have been exposed when it took ChatGPT offline owing to a bug.
The Microsoft-backed company took the chatbot down after a bug in an open-source library allowed some users to see titles from another active user's chat history.
OpenAI discovered that the bug may have caused the unintentional visibility of “payment-related information of 1.2 per cent of the ChatGPT Plus subscribers who were active during a specific nine-hour window”.
In the wake of the breach, OpenAI launched a bug bounty programme for ChatGPT and its other products, offering security researchers up to $20,000 for reporting vulnerabilities and helping the company distinguish good-faith hacking from malicious attacks.
In addition to cybersecurity risks, it is important to understand that ChatGPT can fabricate damaging claims about real people, opening the door to identity misuse.
In one unusual incident, ChatGPT falsely included an innocent, highly respected US law professor on a list of legal scholars who had allegedly sexually harassed students, generated as part of a research study.
Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was shocked to learn that ChatGPT had named him in the research project.
“The programme promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska,” Turley said. In fact, he has never taken students to Alaska, and The Post never published such an article.
Turley said he has “never been accused of sexual harassment or assault by anyone”.
The US Federal Trade Commission (FTC) Chair Lina Khan warned that modern AI technologies like ChatGPT can be used to “turbocharge” fraud.
“AI presents a whole set of opportunities, but also presents a whole set of risks,” Khan told House representatives last month.
“I think we’ve already seen ways in which it could be used to turbocharge fraud and scams. We’ve been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action,” she stated.
A number of prominent technology figures and AI researchers, including Twitter CEO Elon Musk and Apple Co-founder Steve Wozniak, signed an open letter urging AI labs around the world to pause the development of large-scale AI systems, citing the “profound risks to society and humanity” that this software is alleged to pose.
Moreover, Meta (formerly Facebook) discovered malware creators exploiting the public’s interest in ChatGPT to entice users into downloading harmful applications and browser extensions.
The company said it had found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet.
Meta also detected and blocked over 1,000 of these unique malicious URLs from being shared on their apps.