Pierluigi Paganini@securityaffairs.com
//
OpenAI is actively combating the misuse of its AI tools, including ChatGPT, by malicious groups from countries like China, Russia, and Iran. The company recently banned multiple ChatGPT accounts linked to these threat actors, who were exploiting the platform for illicit activities. These banned accounts were involved in assisting with malware development, automating social media activities to spread disinformation, and conducting research on sensitive topics such as U.S. satellite communications technologies.
OpenAI's actions highlight the diverse ways in which malicious actors are attempting to leverage AI for their campaigns. Chinese groups used AI to generate fake comments and articles on platforms like TikTok and X, posing as real users to spread disinformation and influence public opinion. North Korean actors used AI to craft fake resumes and job applications in an attempt to secure remote IT jobs and potentially steal data. Russian groups employed AI to develop malware and plan cyberattacks, aiming to compromise systems and exfiltrate sensitive information.

The report also details specific operations such as ScopeCreep, in which a Russian-speaking threat actor used ChatGPT to develop and refine Windows malware, debug code in multiple languages, and set up command-and-control infrastructure. The malware was designed to escalate privileges, establish stealthy persistence, and exfiltrate sensitive data while evading detection. OpenAI's swift response and the details revealed in its report demonstrate the ongoing battle against the misuse of AI and the proactive measures being taken to safeguard its platforms.
Pierluigi Paganini@securityaffairs.com
//
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.
Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals such as lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs had been preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle, and he fears the legal precedent could lead to a future in which all AI conversations are recorded and accessible, chilling free expression and innovation.

In addition to the privacy dispute, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes, including the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI banned accounts linked to ten such campaigns, among them operations potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These actors used ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities.
iHLS News@iHLS
//
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report from the company describes how groups originating from countries such as China, Russia, and Cambodia have misused generative AI technologies like ChatGPT to manipulate content and spread disinformation, and it traces a steady evolution in how AI is being integrated into covert digital strategies.
OpenAI has uncovered several international operations in which its AI models were misused for cyberattacks, political influence, and even employment scams. Chinese operations were identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages that promised victims unrealistic payouts for simply liking social media posts, a scheme an OpenAI investigator discovered by accident. OpenAI also shut down a Russian influence campaign, dubbed "Operation Helgoland Bite," that used ChatGPT to produce German-language content ahead of Germany's 2025 federal election, operating through social media channels to attack the US and NATO while promoting a right-wing political party. While the detected efforts across these campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.
@siliconangle.com
//
OpenAI is facing increased scrutiny over its data retention policies following a recent court order in the high-profile copyright lawsuit The New York Times filed in 2023. The lawsuit alleges that OpenAI and Microsoft Corp. used millions of the Times' articles without permission to train their AI models, including ChatGPT, and that ChatGPT has reproduced Times content verbatim without attribution. As a result, OpenAI has been ordered to retain all ChatGPT logs, including deleted conversations, indefinitely, to ensure that potentially relevant evidence is not destroyed. The move has sparked debate over user privacy and data security.
OpenAI COO Brad Lightcap announced that although users' deleted ChatGPT prompts and responses are normally erased after 30 days, that practice has been suspended in order to comply with the court order. The change affects users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), but not Enterprise or Edu customers or those with a Zero Data Retention agreement. The company asserts that the retained data will be stored separately in a secure system accessible only to a small, audited OpenAI legal and security team, solely to meet legal obligations. The court order was granted within one day of the NYT's request, driven by concerns that users might delete chats if they were using ChatGPT to bypass paywalls.

OpenAI CEO Sam Altman has voiced strong opposition to the order, calling it an "inappropriate request" and stating that OpenAI will appeal the decision. He argues that AI interactions should be treated with privacy protections similar to conversations with a lawyer or doctor, suggesting the need for "AI privilege." The company has also expressed concern about its ability to comply with the European Union's General Data Protection Regulation (GDPR), which grants users the right to be forgotten. Altman pledged to fight any demand that compromises user privacy, which he considers a core principle, promising customers that the company will contest the plaintiffs' push for access at every step.
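To make the policy shift concrete, here is a minimal, purely illustrative sketch of the kind of retention logic described above: a purge job that hard-deletes chats 30 days after a user deletes them, unless a litigation hold applies, with exempt customer tiers unaffected by the hold. The schema, table, column names, and tier labels are all assumptions invented for this sketch; nothing here describes OpenAI's actual systems.

```python
from __future__ import annotations

import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical schema, invented for illustration only:
#   conversations(id, user_tier, deleted_at, legal_hold)
# deleted_at is set when the user deletes a chat; legal_hold = 1 marks rows
# frozen by a court order, as described in the article.

RETENTION_WINDOW = timedelta(days=30)
# Tiers the article says are unaffected by the retention order.
HOLD_EXEMPT_TIERS = ("enterprise", "edu", "zero_data_retention")


def purge_expired_chats(db: sqlite3.Connection, now: datetime | None = None) -> int:
    """Hard-delete chats soft-deleted more than 30 days ago.

    A litigation hold blocks the purge, except for customer tiers that are
    exempt from the order and therefore keep their normal deletion schedule.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = (now - RETENTION_WINDOW).isoformat()
    tier_slots = ",".join("?" for _ in HOLD_EXEMPT_TIERS)
    cur = db.execute(
        f"""
        DELETE FROM conversations
        WHERE deleted_at IS NOT NULL                       -- user asked to delete
          AND deleted_at < ?                               -- 30-day window elapsed
          AND (legal_hold = 0 OR user_tier IN ({tier_slots}))
        """,
        (cutoff, *HOLD_EXEMPT_TIERS),
    )
    db.commit()
    return cur.rowcount  # number of chats actually purged
```

The point the sketch illustrates is that a legal hold is a property of the data, not of the job: the purge keeps running on schedule, but held rows simply stop matching its delete condition.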
Nick Lucchesi@laptopmag.com
//
OpenAI is planning to evolve ChatGPT into a "super-assistant" that understands users deeply and becomes their primary interface to the internet. A leaked internal document, titled "ChatGPT: H1 2025 Strategy," reveals that the company envisions ChatGPT as an "entity" that users rely on for a vast range of tasks, seamlessly integrated into various aspects of their daily lives. This includes tasks like answering questions, finding a home, contacting a lawyer, planning vacations, managing calendars, and sending emails, all aimed at making life easier for the user.
The document, dated late 2024, describes the "super-assistant" as possessing "T-shaped skills": broad capabilities for tedious daily tasks and deep expertise for more complex work such as coding. OpenAI aims to make ChatGPT personalized and available across platforms, including its website, native apps, phones, email, and even third-party surfaces like Siri. The goal is for ChatGPT to act as a smart, trustworthy, and emotionally intelligent assistant capable of handling any task a person with a computer could do. While the first half of 2025 focused on building ChatGPT into a super-assistant, plans are now shifting to generating "enough monetizable demand to pursue these new models."

OpenAI sees ChatGPT less as a tool and more as a companion for surfing the web, helping with everything from taking meeting notes and preparing presentations to catching up with friends and finding the best restaurant. The company's vision is for ChatGPT to be an integral part of users' lives, accessible no matter where they are.
Kara Sherrer@eWEEK
//
OpenAI, in collaboration with former Apple designer Jony Ive, is reportedly developing a new AI companion device. CEO Sam Altman hinted at the project during a staff meeting, describing it as potentially the "biggest thing" OpenAI has ever undertaken. This partnership involves Ive's startup, io, which OpenAI plans to acquire for a staggering $6.5 billion, potentially adding $1 trillion to OpenAI's valuation. Ive is expected to take on a significant creative and design role at OpenAI, focusing on the development of these AI companions.
The AI device, though shrouded in secrecy, is intended to be a "core device" that integrates seamlessly into daily life, much like smartphones and laptops. It is designed to be aware of a user's surroundings and routines, with the aim of weaning users off excessive screen time. The device is not expected to be a phone, glasses, or a wearable, but rather something small enough to sit on a desk or fit in a pocket; reports suggest the prototype resembles an iPod Shuffle and could be worn as a necklace, connecting to smartphones and PCs for computing and display capabilities. OpenAI aims to release the device by the end of 2026, with Altman expressing a desire to eventually ship 100 million units.

With this venture, OpenAI is directly challenging tech giants like Apple and Google in the consumer electronics market, despite not yet being profitable. The success of the AI companion device is far from guaranteed, given the failures of similar products such as the Humane AI Pin, but the partnership between OpenAI and Jony Ive has generated significant buzz and high expectations within the tech industry.
@www.theatlantic.com
//
OpenAI has reversed its decision to transition into a fully for-profit entity, opting instead to restructure as a public benefit corporation (PBC). This pivot was influenced by legal and civic pressure and signals a significant shift in the company's approach to artificial general intelligence (AGI) development and funding. OpenAI was founded in part as a counter to the perils of prioritizing profit in the development of powerful AI, yet a newly obtained letter from OpenAI lawyers to California Attorney General Rob Bonta reveals the company's concern over anything that might hinder its ability to raise substantial capital.
The decision to remain under the control of its non-profit board comes after backlash from various stakeholders, including civic leaders and the Attorneys General of Delaware and California. The shift to a PBC structure is aimed at balancing the interests of shareholders with the company's mission of ensuring that AGI benefits humanity, acknowledging the need for greater transparency and accountability in AI development while navigating the complex landscape of attracting investment and fostering innovation.

OpenAI's restructured commercial arm, operating as a public benefit corporation, will be legally obligated to consider broader social and environmental goals while still pursuing profit. This pragmatic evolution reflects OpenAI's recognition that the path to its ambitious goals requires addressing both financial sustainability and societal impact. The decision could have profound implications for the future of AI funding, AGI development, and global social systems, possibly setting the stage for the creation of the most powerful non-profit in human history.
References: Last Week in AI, www.marketingaiinstitute.com