by Matt Marshall — @mmarshall, VentureBeat — OpenAI’s announcement last night apparently resolved the saga that has beset the company for the last five days: It is bringing back Sam Altman as CEO, and it has agreed on three initial board members, with more to come. However, as more details emerge from sources about what set off the chaos in the first place, it’s clear the company needs to address a trust issue that may bedevil Altman as a result of his recent actions there.
It’s also not clear how OpenAI intends to clean up the remaining thorny governance issues, including a board structure and mandate that have become confusing and even contradictory. For enterprise decision makers who are watching this saga and wondering what it means for them, and for OpenAI’s credibility going forward, it’s worth looking at the details of how we got here.

After doing so, here’s where I’ve come out: The outcome, at least as it looks right now, heralds OpenAI’s continued shift toward a more aggressive stance as a product-oriented business. I predict that OpenAI’s position as a serious contender in providing full-service AI products for enterprises, a role that demands trust and rigorous safety, may diminish. However, its language models, specifically ChatGPT and GPT-4, will likely remain highly popular among developers and continue to be used via its APIs in a wide range of AI products.
More on that in a second, but first a look at the trust factor that hangs over the company, and how it needs to be dealt with. The good news is that the company has made strong headway by appointing some very credible initial board members, Bret Taylor and Lawrence Summers, and putting some strong guardrails in place. The outgoing board has insisted on an investigation into Altman’s leadership, has blocked Altman and his co-founder Greg Brockman from returning to the board, and has insisted that new board members be strong enough to stand up to Altman, according to the New York Times.