OpenAI recently reported disrupting more than 20 cyber and influence operations in 2024, including activity tied to Iranian and Chinese state-sponsored actors. The company identified three threat actors abusing ChatGPT to support cyberattacks; one of them used the chatbot to help plan attacks on industrial control systems (ICS). These cases illustrate an evolving threat landscape in which AI tools are being leveraged by malicious actors, and they point to the potential for more sophisticated attacks ahead, underscoring the need for robust security measures. OpenAI's proactive detection and mitigation of this activity also shows the value of collaboration between technology companies and cybersecurity researchers, and the company says it is strengthening its defenses to prevent further exploitation of its platforms.
South Korea’s military has accused North Korea of carrying out a GPS signal jamming attack. The jamming began on Friday and continued into Saturday, disrupting various vessels at sea and a significant number of aircraft. The incident underscores the threat posed by GPS jamming and the vulnerability of navigation systems that depend on the signal, and it adds to escalating tensions on the Korean Peninsula, where North Korea’s actions are raising concerns about regional security.
OpenAI’s ChatGPT chatbot actively moderated election news queries and deepfake requests during the recent US election. On November 5th and 6th it directed users asking about election news to outside sources, issuing over 2 million such responses. It also refused to generate DALL-E images depicting individuals like Donald Trump, blocking more than 250,000 requests. These measures were aimed at curbing misinformation and the potential spread of deepfakes, reflecting a deliberate effort to limit the influence of AI-generated content on elections.