OpenAI, the creator of the artificial intelligence (AI) chatbot ChatGPT, has worked with Microsoft, one of its biggest backers, to stop five cyberattacks connected to various malicious actors. A report published on Wednesday says Microsoft has been tracking hacking groups associated with the Chinese and North Korean governments, the Iranian Revolutionary Guard, and Russian military intelligence. According to the report, these groups have been exploring the use of artificial intelligence large language models (LLMs) in their hacking tactics. LLMs use large text datasets to generate human-like responses.
OpenAI’s Discovery and Response to State-Sponsored Hacking Attempts
According to OpenAI, two China-linked groups, Charcoal Typhoon and Salmon Typhoon, were identified among the sources of the five cyberattacks, alongside Crimson Sandstorm, linked to Iran, and Emerald Sleet, linked to North Korea.
The range of governments behind these groups underscores the complexity of cybersecurity threats in today's interconnected world.
According to OpenAI, the groups attempted to use GPT-4 to research companies and cybersecurity tools, debug code, create scripts, carry out phishing campaigns, translate technical papers, evade malware detection, and study satellite communication and radar technology. After discovering this activity, OpenAI terminated the associated accounts.
The company disclosed this as it enforced a blanket ban on state-sponsored hacking groups using its AI technology. While OpenAI successfully disrupted these incidents, it acknowledged the difficulty of preventing every improper use of its programs.
AI Safety Initiatives
Legislators have increased their scrutiny of generative AI developers since the introduction of ChatGPT, responding to a spike in AI-generated deepfakes and scams. In June 2023, OpenAI launched a $1 million cybersecurity grant program to improve and measure the impact of AI-driven cybersecurity technologies.
OpenAI has put controls in place to stop ChatGPT from producing offensive or dangerous responses, but hackers have found ways around these safeguards, enabling the chatbot to produce such content anyway.
The Biden Administration recently worked with more than 200 organizations to create the AI Safety Institute and the United States AI Safety Institute Consortium, with notable participants including OpenAI, Microsoft, Anthropic, and Google. This initiative reflects a concerted national effort to address concerns surrounding AI safety and ethics. The groups grew out of President Joe Biden's executive order on AI safety, issued in late October 2023, which aims to combat AI-generated deepfakes, address cybersecurity concerns, and support the safe development of AI.