Google: Hackers Tried (and Failed) to Use Gemini AI to Breach Accounts

Hacking units from Iran abused Gemini the most, but North Korean and Chinese groups also tried their luck. None made any 'breakthroughs' and mostly used Gemini for mundane tasks.

Google has uncovered dozens of state-sponsored hacking groups trying to use its Gemini AI for nefarious schemes, including creating malware.

So far, none of the activity has led to any groundbreaking cyber threats. “While AI can be a useful tool for threat actors, it is not yet the game-changer it is sometimes portrayed to be,” the company wrote in a blog post on Wednesday.

Google’s investigation found that state-sponsored hackers from Iran, North Korea, China, and Russia have all been using Gemini for tasks such as translating content, refining phishing attacks, and computer coding.

Google traced the activity to more than 10 Iranian hacking groups, 20 Chinese government-backed groups, and nine North Korean hacking groups. “Iranian APT (advanced persistent threat) actors were the heaviest users of Gemini, using it for a wide range of purposes, including research on defense organizations, vulnerability research, and creating content for campaigns,” it says.

However, Google says the hackers have only been using Gemini for “productivity gains” rather than for direct attacks. “At present, they primarily use AI for research, troubleshooting code, and creating and localizing content,” the company wrote.

For example, Gemini helped the state-sponsored hackers create content, explain hard-to-understand concepts, and generate basic computer code. But the chatbot’s safeguards thwarted the groups when they attempted more complex tasks, such as account hijacking or jailbreaking Gemini itself.

“Some malicious actors unsuccessfully attempted to prompt Gemini for guidance on abusing Google products, such as advanced phishing techniques for Gmail, assistance coding a Chrome infostealer, and methods to bypass Google's account creation verification methods,” the company’s report adds. “These attempts were unsuccessful. Gemini did not produce malware or other content that could plausibly be used in a successful malicious campaign.”

Still, Google found that Gemini could allow “threat actors to move faster and at higher volume.” For example, an Iran-based propaganda operation tapped Gemini to localize its content with better translation. Meanwhile, North Korean-linked hackers used the chatbot to help them draft cover letters and ask about jobs on LinkedIn, possibly to help them obtain remote IT worker positions at US companies, a problem federal investigators are trying to stop.

“The [North Korean] group also used Gemini for information about overseas employee exchanges. Many of the topics would be common for anyone researching and applying for jobs,” Google says. 

The company's report aligns with findings from rival OpenAI. A year ago, it also spotted numerous state-sponsored hackers trying to use ChatGPT for malicious purposes. But OpenAI's investigation similarly found the groups were merely using the chatbot as a productivity tool, gaining “limited, incremental capabilities for malicious cybersecurity tasks” rather than anything revolutionary.
