OpenAI shifts from nonprofit to ‘capped-profit’ to attract capital

2019-03-11 22:15

OpenAI may not be quite so open going forward. The former nonprofit announced today that it is restructuring as a “capped-profit” company that cuts off returns on investments past a certain point. But some worry that this move, or rather the way it was made, may leave the innovative company no different from the other AI startups out there.

From now on, profits from any investment in the OpenAI LP (limited partnership, not limited profit) will be passed on to an overarching nonprofit company, which will disburse them as it sees fit. Profits in excess of a 100x return, that is.

In simplified terms: if you invested $10 million today, the profit cap would come into play only after that $10 million had generated $1 billion in returns. You can see why some people are concerned that this structure is “limited” in name only.
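
The arithmetic is simple enough to sketch in a few lines of Python. The function below is a toy illustration of the split described above, under the simplifying assumption of a single flat 100x cap; the actual deal terms aren’t spelled out in the announcement.

    # A toy model of the "capped-profit" split described above.
    # Hypothetical simplification; OpenAI's real terms may differ in detail.
    def split_returns(investment, total_return, cap_multiple=100.0):
        """Divide a hypothetical payout between investors and the nonprofit.

        Investors keep everything up to cap_multiple times their stake;
        anything beyond that flows to the overarching nonprofit.
        """
        cap = investment * cap_multiple
        investor_share = min(total_return, cap)
        nonprofit_share = max(total_return - cap, 0.0)
        return investor_share, nonprofit_share

    # The example above: $10 million invested, cap kicks in past $1 billion.
    investors, nonprofit = split_returns(10e6, 1.5e9)
    print(f"Investors: ${investors:,.0f}")  # Investors: $1,000,000,000
    print(f"Nonprofit: ${nonprofit:,.0f}")  # Nonprofit: $500,000,000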

In a blog post, OpenAI explained the rationale behind its decision.

We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.

We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company.

Essentially, the company is admitting that it was unlikely to raise the money necessary to achieve its goals while operating as a nonprofit — which, as you can imagine, investors see no immediate returns on. (Although it’s possible to make money on spinoffs and other sub-businesses, putting money into a nonprofit isn’t really a lucrative move.)

Less money wouldn’t be as big a problem if OpenAI were not competing with the likes of Google and Amazon for specialists in artificial intelligence, cloud computing, and so on. The cost of development is also quite high.

This of course was also true (though perhaps less acutely) in 2015, when OpenAI was founded. Yet as the founders wrote then:

Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

That doesn’t leave a lot of room for interpretation!

But having said that, OpenAI isn’t the first nonprofit to stumble on the money issue; the simple fact is that it’s hard to outspend global megacorps in a field where success is at least partly determined by budget. And perhaps, they reasoned, isn’t being profitable itself a way of being “free from financial obligations”? Think about it.

The new structure has OpenAI LP doing the actual work the company is known for: interesting and perhaps widely applicable AI research, occasionally withheld in order to save the world.

But the LP will be “governed” (I’ve inquired about the exact meaning of this word in this context) by OpenAI Inc, AKA OpenAI Nonprofit. Profits emerging from the LP in excess of the 100x multiplier go to the nonprofit, which will use them to fund educational programs and advocacy work.

The company justifies this rather high profit “cap” by saying that if it succeeds in creating a working artificial general intelligence (AGI is a poorly defined concept that is nonetheless perhaps the holy grail of current AI research), “we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP.”

Whether these are the words of confidence workers, or merely confident ones, is pretty much entirely a matter of opinion. AGI is nowhere near being achieved, or even properly understood, as any researcher will tell you, but if it can be achieved it is far more likely to be done by people on the leading edge who have access to large budgets and enormous computing resources.

As chief scientist Ilya Sutskever put it in a Reddit comment moments ago: “There is no way of staying at the cutting edge of AI research, let alone building AGI, without us massively increasing our compute investment.” Whatever AGI is, it won’t come cheap.

All the same, the 100x number seems like rather a large jump. Many of the same goals might have been achieved with a 10x or 20x multiplier, which would allow for huge returns without near-term profits appearing to be unlimited in practice. Future rounds will in fact be offered at a smaller multiplier; this one is meant to be a carrot for investors willing to tolerate a bit more risk.
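
Reusing the toy split function from above (again, a hypothetical sketch rather than the actual deal terms), the gap between those multipliers is easy to put in numbers:

    # Same toy split_returns as above: a $10M stake that hypothetically
    # returns $2B, under different cap multiples.
    for multiple in (10, 20, 100):
        investors, nonprofit = split_returns(10e6, 2e9, cap_multiple=multiple)
        print(f"{multiple:>3}x cap: investors ${investors / 1e6:,.0f}M, "
              f"nonprofit ${nonprofit / 1e6:,.0f}M")
    #  10x cap: investors $100M, nonprofit $1,900M
    #  20x cap: investors $200M, nonprofit $1,800M
    # 100x cap: investors $1,000M, nonprofit $1,000M

Even at 20x, early backers would turn $10 million into $200 million before the nonprofit saw a cent; the 100x figure pushes that to a full billion.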

But it has rubbed some the wrong way, and it’s easy to understand grumbling that the company that not long ago said it wanted to be “unconstrained by a need to generate financial return” will now make decisions very much informed by that need. How does that differ from the megacorps with which OpenAI has attempted to contrast itself?

The CEO of the whole shebang is Sam Altman, who stepped down as chairman at Y Combinator just days ago, leading to speculation that he was upping his involvement in another concern; now we know which.

Jack Clark, policy director for OpenAI (though for which OpenAI, who can say?), explained in a bit more detail in an email to TechCrunch.

“In practice, you should think about OpenAI as being led on research and technology by Ilya Sutskever (chief scientist) and Greg Brockman (CTO), with Sam helping out on other aspects of management,” he wrote. “We’ve all been working together for a while, so this isn’t much of a shift internally.”

The board consists of OpenAI’s Brockman, Sutskever, and Altman, as well as original investor but non-employee Reid Hoffman, Adam D’Angelo, Holden Karnofsky, Sue Yoon, and Tasha McCauley. Notably, Elon Musk isn’t a part of it, though he was a big investor and proponent early on; he departed more than a year ago on good terms.

The board is limited to a minority of financially interested parties, and only non-interested members can vote on “decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict,” the announcement noted. So theoretically the keys to the safe are in the hands of those who have no incentive to rifle it. Clark noted that “we’ve been talking to everyone involved for more than a year about this, so everyone was aware.”

OpenAI LP, which we will likely end up just calling OpenAI, will continue its work uninterrupted, it says, even “at increased pace and scale.” So you can expect important papers and work like it has published before, though from now on you will be much more justified in attributing a profit motive to it.