Meta, the parent company of Facebook, Instagram, and WhatsApp, warned on Wednesday that hackers are exploiting the interest generated by new artificial intelligence (AI) tools, such as ChatGPT, to trick internet users into installing malicious code on their devices.
In April, security analysts at the social media giant found malicious software posing as ChatGPT or similar AI tools, Meta’s chief information security officer, Guy Rosen, told reporters. He noted that malicious actors — hackers, spammers, and the like — are always on the lookout for the latest trends that “capture the imagination” of the public, such as ChatGPT. OpenAI’s chatbot, which can hold fluid conversations with humans and generate code and text such as emails and essays, has generated great excitement.
Rosen said that Meta has detected fake browser extensions that claim to contain generative AI tools but actually carry malicious software designed to infect devices. It is common for malicious actors to latch onto whatever is capturing internet users’ attention, tricking people into clicking booby-trapped web links or downloading programs that steal their data.
“We’ve seen this with other popular topics, such as scams driven by the immense interest in digital currencies,” Rosen said. “From a bad actor’s perspective, ChatGPT is the new cryptocurrency.”
Meta has detected and blocked more than a thousand web addresses promoting purported ChatGPT-like tools that were actually traps set by hackers, according to the company’s security team.
Meta has not yet seen hackers use generative AI as anything more than bait, but the company is preparing for the technology itself to be weaponized, something Rosen said it considers inevitable.
“Generative AI is very promising and malicious actors know it, so we all need to be very vigilant,” he said.
At the same time, Meta teams are exploring ways to use generative AI to defend against hackers and their deceptive online campaigns.
“We have teams already thinking about how (generative AI) could be abused and the defenses we need to implement to counter that,” said Meta’s head of security policy, Nathaniel Gleicher, during the same briefing.
“We are preparing for that,” Gleicher said.
The rise of generative AI has opened a new avenue for cybercriminals, and companies like Meta are on the front line of defending against such threats. As the technology advances, individuals and organizations alike should stay informed about how malicious actors abuse it — and take proactive steps to protect their devices so they do not unwittingly fall prey to cybercrime.