OpenAI’s internal discussion board about AI development was compromised by hackers

In context: Despite owning one of today’s most successful generative AI products, OpenAI is a highly controversial company with a history of lies, mismanagement, and blatant abuse of people’s biometric data. Now you can add hackers potentially stealing secrets about AI development to the list.

Unknown malicious actors accessed OpenAI’s internal messaging systems, eavesdropping on sensitive discussions about the company’s AI technologies and research. However, the hackers did not gain access to the servers hosting OpenAI’s source code. Two people familiar with the matter revealed the incident to the New York Times this week, stating that OpenAI never disclosed the breach because no customer data was compromised.

The unauthorized access happened in early 2023, according to the sources, and OpenAI executives disclosed the incident to employees during an internal meeting at the company’s San Francisco offices. The board of directors was informed as well, but no one outside the company was told. OpenAI did not contact the FBI or any other law enforcement agency, as executives did not consider the incident a threat to national security.

Some employees feared that foreign threat actors based in China, Russia, or elsewhere could leverage stolen details about OpenAI’s generative AI technology to damage U.S. interests. They also accused the company of neither taking operational security seriously enough nor properly weighing the risks posed by its own AI systems.

Leopold Aschenbrenner, a technical program manager focused on AI security risks, sent an internal memo to OpenAI’s board of directors describing what he considered an inadequate effort to prevent foreign agents from stealing the company’s technology. OpenAI responded by letting him go, and Aschenbrenner later said that his firing came after a security incident the company never reported or specified.

The sources quoted by the NY Times now confirm and corroborate that breach, while OpenAI has merely said in an interview that it needs the “best and brightest minds” to work on its AI technology and that there are “some risks” to deal with.

OpenAI competitor Anthropic believes that worries about AI posing a risk to humanity are greatly exaggerated, even if Beijing’s Communist dictatorship were to develop a more advanced version of ChatGPT.

As things stand today, OpenAI keeps proving that it isn’t particularly deserving of anyone’s trust. The company’s board fired CEO Sam Altman over an alleged “toxic culture of lying” and “psychological abuse,” yet Altman is now back at the helm of the company he co-founded.

OpenAI’s abuses of copyright, people’s data, and privacy include a ChatGPT voice persona modeled after Scarlett Johansson, which the company later removed after the actress lawyered up.


