- OpenAI has appointed former NSA Director Paul Nakasone to its board of directors.
- Nakasone's hiring is aimed at strengthening AI security, but it has also raised concerns about surveillance.
- The company's internal safety group has also effectively been disbanded.
There are creepy undercover guards outside the office, the former director of the NSA was just appointed to the board, and the internal working group for promoting the safe use of artificial intelligence has effectively been disbanded.
It feels like OpenAI is becoming a little less open every day.
In its latest surprise move, the company announced Friday that it had appointed former NSA director Paul Nakasone to its board of directors.
In addition to leading the NSA, Nakasone also served as head of the U.S. Cyber Command, the Pentagon's cybersecurity arm. OpenAI said its hiring of Nakasone demonstrates the company's “commitment to safety and security” and underscores the importance of cybersecurity as AI continues to evolve.
“OpenAI's dedication to its mission is highly consistent with my own values and experience as a public servant,” Nakasone said in a statement. “I look forward to contributing to OpenAI's efforts to ensure that artificial general intelligence is safe and beneficial for people around the world.”
But critics worry that Nakasone's hiring could mean more surveillance.
Edward Snowden, the US whistleblower who leaked classified documents about surveillance in 2013, said in a post on X that Nakasone's hiring was a “planned betrayal of the rights of all people on Earth”.
“The mask has come fully off. Never trust OpenAI or any of its products (such as ChatGPT),” Snowden wrote.
In another post on X, Snowden said the “combination of the vast amounts of mass surveillance data accumulated over the last 20 years with AI will put truly terrifying powers in the hands of a small number of unaccountable people.”
Meanwhile, Sen. Mark Warner, D-Va., chairman of the Senate Intelligence Committee, called Nakasone's hiring a major win for the company.
“There is no one in the security community more respected,” Warner told Axios.
Nakasone's security expertise may be needed at OpenAI, where critics worry security issues could leave it vulnerable to attack.
In April, OpenAI fired safety researcher Leopold Aschenbrenner after he sent a memo detailing a “major security incident,” in which he called the company's security “grossly inadequate” to prevent theft by foreign powers.
Shortly thereafter, OpenAI's Superalignment team, which was focused on developing AI systems aligned with human interests, was abruptly disbanded after two of the company's most prominent safety researchers resigned.
“For quite some time, I have not seen eye-to-eye with OpenAI's leadership on the company's core priorities,” said Jan Leike, one of the departing researchers.
OpenAI's chief scientist, Ilya Sutskever, who originally helped build the Superalignment team, hasn't said much about his reasons for leaving, but company insiders say his position had become precarious after his involvement in the failed attempt to oust CEO Sam Altman. Sutskever reportedly objected to Altman's aggressive approach to AI development, a disagreement that fueled their power struggle.
And as if that wasn't bad enough, even locals who live and work near OpenAI's San Francisco offices say the company is starting to creep them out: A cashier at a nearby pet store told the San Francisco Standard that there's an “atmosphere of secrecy” at the company's offices.
Several employees at nearby businesses said they saw masked, security-guard-like men standing outside their buildings who would not say whether they worked for OpenAI.
“They [OpenAI] are not bad neighbors,” one person said, “but they are secretive.”