A couple of relatively significant stories broke late last week concerning the — seemingly tenuous — partnership between OpenAI and Microsoft.
The background: OpenAI first turned to Microsoft back in 2019, after the startup lost access to Elon Musk’s billions. Microsoft — which has now sunk more than $13 billion into the ChatGPT-maker — has developed a partnership with OpenAI, where Microsoft provides the compute (and the money) and OpenAI gives Microsoft access to its generative technology. OpenAI’s tech, for instance, powers Microsoft’s Copilot.
According to the New York Times, OpenAI CEO Sam Altman last year asked Microsoft for more cash. But Microsoft, concerned about the highly publicized boardroom drama that was rocking the startup, declined.
- OpenAI recently raised $6.6 billion at a $157 billion valuation. The firm expects to lose around $5 billion this year, and it expects its expenses to skyrocket over the next few years before finally turning a profit in 2029.
- According to the Times, tensions have been steadily mounting between the two companies over issues of compute and tech-sharing; at the same time, OpenAI, focused on securing more computing power and reducing its enormous expense sheet, has been working for the past year to renegotiate the terms of its partnership with the tech giant.

Microsoft, meanwhile, has been expanding its portfolio of AI startups, recently bringing the bulk of the Inflection team on board in a $650 million deal.
Now, the terms of OpenAI’s latest funding round were somewhat unusual. The investment was predicated on an assurance that OpenAI would transition into a fully for-profit corporation. If the company has not done so within two years, investors can ask for their money back.
According to the Wall Street Journal, an element of the ongoing negotiation between OpenAI and Microsoft has to do with this restructuring: specifically, how Microsoft’s $14 billion investment will transfer into equity in the soon-to-be for-profit company.
- According to the Journal, both firms have hired investment banks to help advise them on the negotiations; Microsoft is working with Morgan Stanley and OpenAI is working with Goldman Sachs.
- Amid a number of wrinkles — the fact that OpenAI’s non-profit board will still hold equity in the new corporation; the fact that Altman will be granted equity; the risk of antitrust scrutiny, depending on the amount of equity Microsoft receives — there is another main factor the two parties are trying to work out: what governance rights each company will have once the dust settles.

Here’s where things get really interesting: OpenAI isn’t a normal company. Its mission is to build artificial general intelligence, a hypothetical technology that pointedly lacks any sort of universal definition. The general idea is that it would possess, at least, human-adjacent cognitive capabilities; some researchers don’t think it’ll ever be possible.
There’s a clause in OpenAI’s contract with Microsoft stipulating that, if OpenAI achieves AGI, Microsoft gets cut off. OpenAI’s “board determines when we’ve attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”
To quote from the Times: “the clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations.”

This is a good example of why the context behind definitions matters so much when discussing anything in this field. There is a definitional problem throughout the field of AI. Many researchers dislike the term “AI” itself; it’s a misnomer — we don’t have an actual artificial intelligence.
The term “intelligence” is itself vague and open to the interpretation of the developer in question.
And the term “AGI” is as formless as it gets. Unlike physics, for example, where gravity is a known, hard, agreed-upon concept, AGI is theoretical, hypothetical science; further, it is a theory bounded by resource constraints and by massive gaps in our understanding of human cognition, sentience, consciousness and intelligence, and how these all fit together physically.
This doesn’t erase the fact that the labs are trying hard to get there. |
But what this environment could allow for is a misplaced, contextually unstable definition of AGI that OpenAI pens either as a ticket out from under Microsoft’s thumb or as a means of negotiating the contract of Sam Altman’s dreams.
In other words, OpenAI saying it has achieved AGI doesn’t mean that it has.
As Thomas G. Dietterich, Distinguished Professor Emeritus at Oregon State University, said: “I always suspected that the road to achieve AGI was through redefining it.”