AI: Corporate Governance – Who’s in Charge?

National Review Online, December 3, 2023

In last week’s Capital Letter, I wrote about the drama at OpenAI, the artificial intelligence company and creator of ChatGPT, which has the eventual goal of designing an AGI (artificial general intelligence), an automated system that could reason as well as Homo sapiens before, presumably, overtaking us all.

Most companies have (or are meant to have) the primary objective of generating economic return for their shareholders. But shareholders have the right to amend that presumption. OpenAI was originally set up as a non-profit with the overriding objective of ensuring that AGI “benefits all of humanity,” a mission as pretentious as it was (conveniently) subjective. The majority of the board was independent, entrusted with the responsibility of ensuring that the company’s activities did indeed benefit all humanity (whatever that means). In a swipe at the profit motive, the founders (many of whom had benefited very handily from the way it motivates and rewards) put emphasis on the fact that “the independent directors do not hold equity in OpenAI.”

These arrangements did not change, or at least did not change enough, after the company added a vehicle to attract the “for profit” investors needed to advance AI. Outperforming clever old humanity wouldn’t come cheap, but the independent directors retained their role as custodians of OpenAI’s conscience. The profits payable out of the new “for profit” vehicle were, as I noted last week, capped.

From OpenAI’s website:

The for-profit’s equity structure would have caps that limit the maximum financial returns to investors and employees to incentivize them to research, develop, and deploy AGI in a way that balances commerciality with safety and sustainability, rather than focusing on pure profit-maximization.

This nonsense would generally put off investors focused on economic return, but when mania enters the market, the normal rules are cast aside. As a reminder, I’ll requote the words of veteran investor Vinod Khosla, whose firm invested in OpenAI’s “for profit” model. He had dismissed qualms about OpenAI’s structure: “If you’re talking about changing the world, who freaking cares?”

When investors talk of “changing the world,” a bubble is (usually) inflating. 

As I also noted last week, even sober Microsoft had not been that bothered by the structure. It has invested (or committed to invest) up to $13 billion in OpenAI, not exactly chump change. It now owns 49 percent of OpenAI, but, wary of attracting the attention of antitrust enforcers, has had no board representation. Nevertheless, it had taken, shall we say, precautions.

Bloomberg (December 1):

Altman often bragged that this board—on which Microsoft didn’t have a seat—would shut down OpenAI if its corporate expansion ever got out of hand. During an interview with Bloomberg Businessweek earlier this year, he joked about “this internet meme that I carry around a button to blow explosive bolts into the data center.” The meme was false, he said, but the general sentiment behind it was true. OpenAI’s board would gladly ignore Microsoft’s wishes in a disagreement over AI safety.

But this idea understated Microsoft’s leverage in any potential dispute. As OpenAI’s primary investor, Microsoft had the right to resell OpenAI’s tech to its corporate customers and held an expansive license to use the startup’s AI models. Microsoft’s Azure cloud computing division also built and houses the supercomputer OpenAI uses to train its models, a critical piece of infrastructure that Microsoft, rather than OpenAI, owns. Microsoft had everything it needed, in other words, to quickly spin up a credible OpenAI clone if things went sideways.

That last element played a key part in Microsoft’s ability to intervene when fighting broke out at OpenAI. The unexpected element was that the dispute was not between Altman and Microsoft, but between Altman and OpenAI’s independent board members. Altman was fired by the board, but Microsoft rallied behind him, and (as discussed last week) hired (or arranged to hire) both him and OpenAI’s president. It also offered jobs to OpenAI engineers who were prepared to defect. The threat of an exodus was too much, and the board capitulated. Altman was back.

Bloomberg:

The result allows Microsoft to more or less go back to business as usual: aggressively adding AI assistants, which Microsoft calls Copilots, to all its software offerings…

These AI assistants don’t come cheap. Any business that wants one for Word and Excel, for example, will pay an additional $30 per user per month, roughly doubling what a typical corporate customer pays for Microsoft’s office suite. At the same time, free and open-source AI assistants are widely available. Microsoft is banking on customers being willing to pay for the productivity gains from Copilot and the convenience of having it baked into such a wide array of software.

But why was Altman fired in the first place? There are a number of theories (some pretty lurid). The most credible (to me) still seems to be that the independent members of the board were worried that Altman’s drive for growth risked undermining the principle of working to benefit “all humanity,” something that they appeared to define primarily in terms of “safety.” But the definition of “safety” appears to have been distorted by the excessive credibility given to fantasies that AI (or, more specifically, AGI) could represent an existential threat to our species. To be fair, Altman has warned of the potential dangers of AI in not dissimilar terms (but for reasons that, as discussed last week, may owe more to his business strategy than to any dread of disaster). However, he did not seem to have been invested enough in “safety” to satisfy board members who had overinvested in apocalyptic nightmares and underinvested in the company.

And when the word lurid appears in a report about the tech space, Elon Musk (a co-founder of OpenAI, although long gone now) is normally not far behind.

Lydia Moynihan and Thomas Barrabi in the New York Post (November 30):

[Musk] shared an anonymous and unverified letter on X from former employees that accuses Altman of “deceit and manipulation.”

Musk also shared a story that reported the board had recently been warned of a breakthrough that could “threaten humanity.” He labeled his post “extremely concerning.” The story said that it was this “unreported letter and AI algorithm” that catalyzed Altman’s removal…

The combination of those concerns – plus having Elon in their ear – may have been enough to push some board members over the edge when it came to ousting Altman, according to some insiders.

In fact, sources say D’Angelo, who is also CEO of social Q&A website Quora, and others on the board felt confident that Musk would support their decision. At the very least, the notion of having Musk and other powerful supporters in their corner may have caused the board to miscalculate the employee and investor uproar that Altman’s firing would cause...

Musk expanded on his apparent concern over [OpenAI chief scientist Ilya] Sutskever’s actions during a bombshell appearance at the New York Times DealBook Summit on Wednesday – telling the crowd that he wanted more information about why the AI scientist had “felt so strongly as to fight Sam.”

“That sounds like a serious thing. I don’t think it was trivial. And I’m quite concerned that there’s some dangerous element of AI that they’ve discovered,” Musk said.

At the same time, Musk insisted that he doesn’t know the actual reason for the Altman ouster.

“I’ve talked to a lot of people… I’ve not found anyone who knows why,” Musk said.

Musk said that following the drama, he reached out to Sutskever but the OpenAI scientist declined to speak with Musk about the situation.

However, he also called for more transparency about the situation. The reason is “either serious and we should know” … or it was “silly” …

Musk has a potential business motive for stoking dissent at OpenAI. He has been vocal about his concerns that AI may extinguish humanity; nevertheless, he continues to work on his own company, xAI.

Many experts have suggested that the dystopian, Terminator-like predictions floated by executives like Musk and Altman are actually an attempt at “regulatory capture.”

By vocally discussing the worst possibilities of an AI future, the experts say, leading executives can help shape regulation and gatekeep the technology from potential rivals.

If I had to guess, “many experts” are on the right track. 

A new “transitional” board has been formed, with only one of the old independents still in place. Microsoft will have a non-voting observer, and the board now includes Bret Taylor, a former co-chief executive of Salesforce, and former Treasury Secretary Larry Summers. These are steps seemingly in the right direction, although this tweet from Summers was reason (perhaps) for some concern:

I am excited and honored to have just been named as an independent director of @OpenAI. I look forward to working with board colleagues and the OpenAI team to advance OpenAI’s extraordinarily important mission.

Maybe “extraordinarily important mission” is just Summers throwing in a bit of hype to be polite (Taylor has also used similar language). If so, that’s fine, but if Summers is suggesting that this company’s “mission” is too important to be entrusted to anything so vulgar as a straightforward corporation run for the benefit of its shareholders, that is a cause for concern.

The Daily Telegraph’s Ben Wright is by no means unworried by AI’s potential for risk as well as reward:

Even discounting some of the more outlandish claims made about generative artificial intelligence, OpenAI was developing technology that had the potential to be both extremely valuable while also raising some pretty existential questions.

Nevertheless, he concludes as follows (emphasis added):

In his Stratechery newsletter, technology analyst Ben Thompson writes OpenAI’s melodrama will hopefully debunk the myth that anything but a for-profit corporation is the right way to organise a company.

Even before the recent drama, Thompson says he was more nervous than relieved when he found out Altman doesn’t own any equity in OpenAI: “There is something about making money and answering to shareholders that holds the more messianic impulses in check”.

It’s not that, for want of a phrase, greed is good; it’s just that, as an organising principle, no one’s come up with anything better. 

“Trying to organise incentives by fiat simply doesn’t account for all of the possible scenarios and variables at play in a dynamic situation”, writes Thompson. “Harvesting self-interest has, for good reason, long been the best way to align individuals and companies.”

The solution to OpenAI’s woes therefore seems pretty clear: a healthy dose of good, old-fashioned capitalism.

This is the way.

Once again, this is not to deny that AI may lead to substantial and possibly dangerous disruption in employment patterns. I wrote about that last week, and, in the more general context of automation, here back in 2016. But managing that should be a matter for the democratic process, not corporate boardrooms, remembering all the while that AI cannot be uninvented, and that some countries will be accelerating their efforts in this area, not applying the brakes.

So far as the last of those issues is concerned, James Pethokoukis has a cautionary tale to tell, revolving around the decision by the Ottomans to suppress Gutenberg-style movable-type printing presses, at least for their Muslim subjects, a ban that stood for more than two centuries.

Pethokoukis quotes some comments made by the UAE’s Minister of State for Artificial Intelligence, Omar Al Olama, which were reported in Fortune:

“We over-regulated a technology, which was the printing press,” said Al Olama. “It was adopted everywhere on Earth. The Middle East banned it for 200 years.” “The calligraphers came to the sultan and said: ‘We’re going to lose our jobs, do something to protect us’—so job loss protection, very similar to AI,” the UAE minister explained. “The religious scholars said people are going to print fake versions of the Quran and corrupt society—misinformation, second reason.” Lastly, Al Olama said it was the fear of the unknown that led to this fateful decision. “The top advisors of the sultan said: ‘We actually do not know what this technology is going to do, let us ban it, see what happens to other societies and then reconsider,’” he explained.

Pethokoukis gives a lot more detail in his article (read the whole thing!), but in case readers miss the warning running through it, he refers to work by Anton Howes, a historian of innovation. “According to Howes,” writes Pethokoukis, “some combination of caution, apathy, and ignorance delayed the printing press and helped the West take a technological and economic lead that persists to this day.”

Pethokoukis:

And it is those same factors — caution, apathy, and ignorance — that could hamper efforts to technologically advance and economically integrate AI in rich countries such as the United States. Policymakers, I fear, don’t fully recognize or understand the potential upsides of AI, while obsessing over potential downsides, including science fictional ones.

But . . . Skynet.

And, yes, these new-fangled printing presses could be used to circulate ideas that were nonsense. 

Pethokoukis:

Nothing new about misinformation. Nothing new about bad ideas gaining currency via new information technology. And nothing new about elites fearing the disruptive potential of new technologies.

If AI/AGI is to be regulated, it should be regulated from outside, through the democratic process, but those who watch that process should remember that regulators have their own agendas independent of any protection of the public. And where there are regulators, there will be rent seekers not far behind. 

Extract from the Capital Letter for the week beginning November 27, 2023