
Corporate Beef: The Open AI Board vs Sam Altman

By: Sabrina Clarke

· Leadership

I was under a rock when the news about Sam Altman being fired broke, but it is everywhere!

There are so many angles on this; I, of course, will look at it from an ESG perspective.

In summary, what's happened:

- Sam Altman, former CEO of OpenAI (a non-profit with a commercial arm, and the innovator behind ChatGPT), was fired by the OpenAI board. The streets say the firing happened on a video call.


- The formal statement reads: "Mr. Altman's departure follows a deliberative review process by the Board, which concluded that he was not consistently candid in his communications with the Board, hindering its ability to exercise its responsibilities. The Board no longer has confidence in his ability to continue leading OpenAI." Translation from corporate code: Sam needed to take several seats and know his place.


- The Board appointed Mira Murati, then-CTO, as interim CEO while searching for a permanent successor.

What's the beef?

Reading between the lines of the formal statement, there was a power struggle between the Board and Sam, and the Board's chess move was to fire him.

What does this have to do with ESG?

'S' is for Social, and 'G' is for Governance. Let's take the G first. The Board followed a governance process to remove Sam, but was it clear? The data from the process should remain private, but the governance process itself should be transparent. Citing a "deliberative process" and a lack of consistent candour is not sufficient. How are both of those qualified outside of this issue with Sam, and for whom?

Let's look at the 'S'; the social aspect is where the Board and the executive leadership team run into further issues. Sam started an AI revolution and is a figurehead whom OpenAI's employees support. Circa 600 people now threaten to go to Microsoft unless the Board resigns. The formal statement does not signal any intent of change management (although there may be internally), nor does it acknowledge what the transition period could mean for OpenAI (not a good move). We are seeing the fallout and the underestimation on both counts.

The Board has a skills gap: either the situation could have been handled more effectively, or the person on the Board with the right skillset provided counsel and was ignored. The problem with a gap is that it will be filled. Fair play to Microsoft.

I will watch OpenAI's case study unfold like the rest of the world, reminded of the Chappelle's Show sketch "When Keeping It Real Goes Wrong".

If you don't want this to be you, let's talk.