“The problem is Sam Altman”: OpenAI insiders don’t trust CEO


OpenAI brainstorms ways AI can benefit humanity in effort to counter bad vibes.

On the same day that OpenAI released policy recommendations to ensure that AI benefits humanity if superintelligence is ever achieved, The New Yorker dropped a massive investigation into whether CEO Sam Altman can be trusted to actually follow through on OpenAI’s biggest promises.

Parsing the publications side by side can be disorienting.

On the one hand, OpenAI said it plans to push for policies to “keep people first” as AI starts “outperforming the smartest humans even when they are assisted by AI.” To achieve this, the company vows to remain “clear-eyed” and transparent about risks, which it acknowledged includes monitoring for extreme scenarios like AI systems evading human control or governments deploying AI to undermine democracy. Without proper mitigation of such risks, “people will be harmed,” OpenAI warned, before describing how the company could be trusted to advocate for a future where achieving superintelligence means a “higher quality of life for all.”

On the other hand, The New Yorker interviewed more than 100 people familiar with how Altman conducts business. The publication also reviewed internal memos and interviewed Altman more than 12 times. The resulting story provides a lengthy counterpoint explaining why the public may struggle to trust OpenAI’s CEO to “control the future” of AI, no matter how rosy the company’s vision may appear.

Overall, insiders painted Altman as a people-pleaser who tells others what they want to hear while questing for power in an alleged bid to always put himself first. As one board member summed up Altman, he has “two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

While The New Yorker found no “smoking gun,” its reporters reviewed messages from OpenAI’s former chief scientist, Ilya Sutskever, and former research head, Dario Amodei, that documented “an accumulation of alleged deceptions and manipulations.” Many of the incidents could be shrugged off individually, but when taken together, both men concluded that Altman was not fostering a safe environment for advanced AI, The New Yorker reported.

“The problem with OpenAI,” Amodei wrote, “is Sam himself.”

OpenAI’s worried the public is souring on AI

Altman either disputed claims in the story or claimed to have forgotten certain events. He also attributed some of his shifting narratives to the changing landscape of AI and admitted that he’s been conflict-avoidant in the past.

But his seeming contradictions are getting harder to ignore as scrutiny of OpenAI intensifies amid growing government reliance on its models and lawsuits labeling its tech as unsafe.

Perhaps most visibly to the public, Altman has recently shifted away from positioning OpenAI as a sort of savior blocking AI doomsday scenarios, instead adopting a “tone” of “ebullient optimism,” The New Yorker reported.

The policy recommendations echo this at times. Discussing the recommendations—which include experimenting with shorter workweeks and creating a public wealth fund to share AI profits—OpenAI’s chief global affairs officer, Chris Lehane, confirmed to The Wall Street Journal that the company is urgently concerned about negative public opinion of AI. While announcing its big ideas to spare humanity from AI dangers, OpenAI also promoted “a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas.”

However, The New Yorker’s report makes it easier to question whether the recommendations were rolled out to distract from mounting public fears about child safety, job displacement, or energy-guzzling data centers. One recent Harvard/MIT poll found that Americans’ biggest concern is that powering AI will hurt their quality of life, Axios reported. Ultimately, these concerns might sway votes for Democrats and Republicans ahead of the midterm elections, the WSJ noted, as data center moratoriums that could slow AI advancement are gaining traction.

For Altman and his company, getting the public to buy into their vision of AI at this critical juncture likely feels essential, since Republicans losing control of Congress could pave the way for stricter AI safety laws that, The New Yorker noted, Altman has privately lobbied against.

Without trust in Altman, it will likely be much harder to convince the public that OpenAI isn’t simply saying whatever it takes to entrench its own dominance, The New Yorker suggested.

What exactly is OpenAI pitching?

“We don’t have all, or even most of the answers,” OpenAI said. Instead, the company characterized its “industrial policy for the intelligence age” as “initial ideas for an industrial policy agenda to keep people first during the transition to superintelligence.”

Calling for “common-sense” regulations and a public-private partnership to quickly iterate on successes, OpenAI pitched “ambitious” policy ideas to ensure that everyone can access AI and profit from it. In its bushy-tailed vision, OpenAI acknowledged that it hopes to achieve what society never did: guarantee Internet access and ensure AI is “fairly deployed” across the US, with everyone trained to use it.

Worker protections are a focus of OpenAI’s plan. Recommendations included involving workers in discussions on how AI systems work to improve productivity and make workplaces safer, as well as on how to “set clear limits on harmful uses of AI.” OpenAI also suggested creating a tax on automated labor that could be used to fund core programs like Social Security, Medicaid, SNAP, and housing assistance as companies rely less on human labor. Among other enticing ideas was a plan to “incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both.”

Additionally, OpenAI proposed a “public wealth fund” that “provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth.”

“Returns from the Fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth, regardless of their starting wealth or access to capital,” OpenAI said.

As AI takes on more tasks, humans can gravitate toward care-centric work, OpenAI suggested, recommending policy ideas to help displaced workers get training to work in health care, elderly care, daycare, or community service settings. To ensure people are attracted to those roles—historically undervalued as women’s work—OpenAI suggested initiatives to help society recognize that caregiving is “economically valuable work.”

Human workers will also be needed to use AI to accelerate scientific advancements, OpenAI said.

However, all these public benefits that OpenAI promises can only be realized if we build a “resilient society” that can quickly respond to risky implementations and “keep AI safe, governable, and aligned with democratic values,” the company said.

That aspect of OpenAI’s vision requires firms like OpenAI to develop safety systems, among other efforts, to help improve public trust in AI. And, OpenAI seems to suggest, we should trust that those systems will work, interfering with these firms only when actual dangers are looming.

“As we progress toward superintelligence, there may come a point where a narrow set of highly capable models—particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks—require stronger controls,” OpenAI said.

When that day arrives, OpenAI opined, there should be a global network in place to communicate emerging risks. However, only the firms with the most advanced models should be subjected to rigorous audits, so that smaller firms can still compete. That’s the path to ensure no firm’s dominant position can be abused to unfairly shut down rivals or weaken democratic values, OpenAI said, while insisting that public input is vital to AI’s success.

Altman has previously persuaded “a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities,” The New Yorker reported. But for a public already alleging harms from OpenAI models, it might be getting harder to entertain lofty ideas from a company led by a man the magazine called “the greatest pitchman of his generation.”

One OpenAI researcher told The New Yorker that Altman’s promises can sometimes seem like a stopgap to overcome criticism until he reaches the next benchmark. When it comes to superintelligence, some optimistic experts think it could take two years, which is longer than Elon Musk stayed at OpenAI before famously criticizing Altman’s leadership and leaving to start his own AI firm.

Altman “sets up structures that, on paper, constrain him in the future,” the OpenAI researcher told The New Yorker. “But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
