The fixer's conundrum: Chris Lehane and OpenAI's unattainable quest

Chris Lehane excels at making unfavorable news vanish. Having been Al Gore’s press secretary during the Clinton administration and Airbnb’s top crisis manager through every regulatory challenge imaginable, Lehane understands the art of spinning narratives. He is now two years into what could be his most daunting role yet: as OpenAI’s VP of global policy, where he must persuade the world that OpenAI genuinely cares about democratizing artificial intelligence, even as the company increasingly acts like any other tech behemoth claiming to be unique.

I spent 20 minutes with him onstage at the Elevate conference in Toronto earlier in the week—20 minutes to move beyond prepared statements and delve into the genuine conflicts undermining OpenAI’s meticulously crafted image. It proved challenging and not entirely successful. Lehane is exceptionally skilled at his job. He’s affable. He projects reason. He acknowledges uncertainty. He even mentions waking at 3 a.m., concerned about whether any of this will truly benefit humanity.

However, noble intentions mean little when your organization is issuing subpoenas to critics, draining water and power from economically struggling towns, and resurrecting dead celebrities to cement your market position.

The company’s Sora problem sits at the heart of every other problem. The video generation tool debuted last week with copyrighted content seemingly baked into it. That was a daring step for a company already facing lawsuits from the New York Times, the Toronto Star, and much of the publishing industry. From a business and marketing perspective, it was also ingenious. The invite-only app quickly shot to the top of the App Store as users produced digital versions of themselves, OpenAI CEO Sam Altman, characters like Pikachu and Cartman of “South Park,” and deceased celebrities like Tupac Shakur.

When asked about OpenAI’s rationale for launching the latest iteration of Sora with these characters, Lehane suggested that Sora is a “general purpose technology,” akin to the printing press, democratizing creativity for individuals lacking talent or resources. He even stated that he—a self-proclaimed creative novice—can now create videos.

What he avoided mentioning is that OpenAI initially “allowed” rights holders to opt out of having their content used to train Sora, which deviates from standard copyright practice. Subsequently, after OpenAI observed the popularity of using copyrighted images, it “evolved” toward an opt-in approach. This isn’t iteration; it’s gauging the limits of what you can get away with. (Incidentally, despite the Motion Picture Association’s recent legal rumblings, OpenAI appears to have gotten away with a considerable amount.)

Unsurprisingly, this situation echoes the frustration of publishers accusing OpenAI of training its models on their content without sharing the financial gains. When I questioned Lehane about publishers being excluded from the economic benefits, he cited fair use, the American legal principle designed to balance the rights of creators with the public’s access to information. He dubbed it the secret weapon of U.S. tech dominance.

Perhaps. But I had just interviewed Al Gore, Lehane's former boss, and realized that anyone could simply ask ChatGPT about that interview rather than read my article on TechCrunch. “It’s ‘iterative’,” I remarked, “but it’s also a replacement.”

Lehane listened and abandoned his prepared response. “We’re all going to have to sort this out,” he stated. “It’s easy to say we need to develop new economic revenue models. But I believe we will.” (What I inferred was: we’re figuring things out as we progress.)

Then there’s the infrastructure issue that nobody wants to address honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently started construction on a sizable data center in Lordstown, Ohio, in collaboration with Oracle and SoftBank. Lehane has likened the adoption of AI to the emergence of electricity—noting that those who accessed it later are still trying to catch up—yet OpenAI’s Stargate project seems to be targeting some of those same economically struggling regions to establish facilities that require substantial amounts of water and electricity.

When asked whether these communities will benefit or simply bear the costs, Lehane shifted to gigawatts and geopolitics. He said that OpenAI needs roughly a gigawatt of new capacity every week, noting that China added 450 gigawatts of capacity last year, plus 33 nuclear facilities. If democracies want democratic AI, he argued, they must compete. “The optimist in me thinks this will modernize our energy systems,” he said, depicting a vision of a re-industrialized America with upgraded power grids.

It was inspiring, but it didn’t answer whether the residents of Lordstown and Abilene will watch their utility bills rise while OpenAI generates videos of The Notorious B.I.G. It’s worth noting that video generation is among the most energy-intensive forms of AI.

There’s also a human toll, highlighted the day before our interview when Zelda Williams posted on Instagram, pleading with strangers to cease sending her AI-generated videos of her late father, Robin Williams. “You’re not creating art,” she wrote. “You’re turning human lives into disgusting, over-processed hotdogs.”

When I asked how the company weighs this kind of personal harm against its mission, Lehane responded by talking about processes: responsible design, testing frameworks, government partnerships. “There isn’t a guide for any of this, right?” he said.

Lehane displayed vulnerability at times, stating that he recognizes the “significant responsibilities” that accompany OpenAI’s activities.

Regardless of whether those moments were intended for the audience, I believe him. In fact, I left Toronto feeling as though I had witnessed a masterclass in political communication—Lehane navigating an impossible task while evading questions about company choices that, for all I know, he might not even support. Then news emerged that further complicated the already complex situation.

Nathan Calvin, a lawyer who works on AI policy at the nonprofit advocacy group Encode AI, revealed that while I was interviewing Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his home in Washington, D.C., during dinnertime to serve him a subpoena. The company wanted his private communications with California legislators, college students, and former OpenAI employees.

Calvin alleges that the action was part of OpenAI’s intimidation tactics around California’s SB 53, a new AI safety bill. He asserts that the company used its ongoing legal dispute with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. Calvin added that he opposed OpenAI’s stance on the bill, and that when he saw the company claim it “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he called Lehane, specifically, the “master of the political dark arts.”

In Washington, that might be seen as a compliment. However, at a company like OpenAI, whose mission is “to build AI that benefits all of humanity,” it seems like an accusation.

What’s more significant is that even OpenAI’s own employees are conflicted about the company’s direction.

As my colleague Max reported last week, numerous current and former employees aired their concerns on social media following the release of Sora 2. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is “technically amazing, but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

On Friday, Josh Achiam—OpenAI’s head of mission alignment—posted something even more noteworthy regarding Calvin’s accusation. After prefacing his remarks by stating that they were “possibly a risk to my whole career,” Achiam wrote about OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”

It’s worth pausing to consider that. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” carries more weight than a competitor taking jabs or a reporter asking questions. This is an individual who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the potential professional repercussions.

It’s a defining moment, one whose contradictions may only intensify as OpenAI accelerates toward artificial general intelligence. It also makes me think that the real question isn’t whether Chris Lehane can promote OpenAI’s mission, but whether others—including, crucially, the individuals who work there—still believe in it.