Microsoft and OpenAI have released a joint report revealing a worrying rise in the use of artificial intelligence (AI) language models by hacker groups from China, Iran, North Korea and Russia to improve their chances of successful cyberattacks.
According to the report, these government-affiliated groups use AI for everything from researching satellite technology to developing malicious code that can evade detection by cybersecurity software.
Microsoft and OpenAI have identified five different groups using large language models (LLMs) in connection with cyberattacks: Russia's Forest Blizzard, also known as Strontium; North Korea's Emerald Sleet, also known as Thallium; Iran's Crimson Sandstorm, also known as Curium; and China's Charcoal Typhoon, known as Chromium, and Salmon Typhoon, known as Sodium.
The companies stated in the report:
Cybercriminal groups, state-linked threat actors, and other adversaries are researching and testing various AI technologies as they emerge to understand the potential value to their operations and the security controls they may need to bypass.
In the case of the Russian hackers, Microsoft and OpenAI say the group uses LLMs to understand satellite capabilities and radar technologies and to obtain assistance with scripting and file manipulation tasks.
The North Korean group Emerald Sleet has used the technology to better understand vulnerabilities in publicly reported software, to assist with scripting, to improve social engineering in email phishing and spear-phishing campaigns, and to research organizations such as think tanks that focus on North Korea's nuclear weapons program.
The Iranian group Crimson Sandstorm has likewise used the technology for spear-phishing campaigns, developing code, and attempting to bypass antivirus programs.
Regarding the Chinese groups Charcoal Typhoon and Salmon Typhoon, Microsoft notes that they have used LLMs for various purposes, from translation and the simplification of cyber tasks to detecting coding errors and potentially developing malicious code.
The companies said they had disabled the accounts and assets of each of the groups, adding that they had not detected any “significant attacks” using the LLMs they monitor.