Around one year ago, OpenAI released a version of its ChatGPT product that captured the public's imagination around the world. For many, it was their first exposure to the sophistication of deep learning (DL) AI, in the form of generative AI (GenAI) based on a large language model (LLM), delivered through a chatbot-style text input interface. Virtually any question asked would return an adaptive, detailed, and human-like response.
In the technology industry, when society gravitates so strongly towards a particular use case, it is often referred to as a “killer app”. In this instance, generative AI driving a chatbot interface quickly became one, launching a highly competitive software industry “arms race” throughout 2023, in which the dialogue around this technology was distilled down to “The answer is GenAI; now what is the question?”.
Describing the GenAI technology market, and its adoption by organisations, as nascent could be considered an understatement; yet the perceived opportunity it presents is accelerating its adoption, maturity, and growth far faster than for the majority of technologies before it. Understandably, however, with such a new technology, its impact on society as a whole, including how it is used within organisations, has yet to be fully understood. This creates a significant demand for caution, through technology guardrails and government regulation amongst other approaches, to drive the ethical use of AI across society.
The fluidity of leadership
So, when reflecting upon the year-long journey of GenAI to this point, it would have been unimaginable to think that OpenAI would be in a position where its board would sack its CEO. Yet as of Friday, November 17th, this is exactly what happened, in events that played out over the weekend and subsequent week as if an episode of the comedy satire Silicon Valley were still in production whilst taking inspiration from the recently concluded Succession.
On November 17th, OpenAI posted a blog that stated:
“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
Since this dramatic turn of events, a variety of reports have speculated on the cause and observed how the drama has unfolded. The removal of Sam Altman saw the following events occur:
- OpenAI president Greg Brockman quit in support of Altman.
- Microsoft, and more specifically Satya Nadella, reportedly intervened on Altman’s behalf. This included offering Microsoft roles to Sam, Greg, and any unsatisfied employee of OpenAI.
- CTO Mira Murati was initially appointed interim CEO, but after she voiced concerns about Altman’s removal, the then board appointed Emmett Shear (co-founder of Twitch) as interim CEO on November 19th.
- 700+ employees grouped together, writing to the OpenAI board that they would resign en masse if Sam Altman were not reinstated.
- OpenAI’s investors were also actively seeking Altman’s reinstatement, though, as reported, this pressure would likely be inconsequential because the not-for-profit parent organisation has ultimate control.
- Then, after five days, it was reported that Sam Altman and Greg Brockman would return to their prior roles.
- As part of the return, a new board was agreed upon, with only Adam D’Angelo retained from the prior board, and an independent law firm will conduct an investigation.
- OpenAI’s Chief Scientist Ilya Sutskever, who reportedly played a key role in Sam Altman’s removal, publicly expressed deep regret for his participation.
The calm after the storm
Reflecting on the above, it is hard to identify what was really at the root of this drama, which moved at such a rapid pace. Given all the corporate players involved, it is unlikely that the matters at its centre will ever be fully exposed. More likely, a thorough investigation and a very carefully written report by the independent law firm will address the elements that have become public, handled sensitively to limit any ongoing reputational damage to OpenAI and to restore the perception of strong leadership and a clear strategic direction.
What is unlikely ever to be understood is whether concerns reportedly raised by Chief Scientist Ilya Sutskever, regarding the societal and ethical implications of GenAI and of other models being worked on, were a leading factor in this situation, as the company’s growing drive for profits may have suppressed those concerns.
From PAC’s perspective, GenAI is the archetypal “Pandora’s box” situation: the box has been opened, so to speak, and given the publicity it has received, it will, over time, rapidly change how we all work and engage with the world. The question PAC feels should be asked is: “Are companies like OpenAI responsible stewards of this technology, given their profit motivations, and if not, who and what must be put in place to support this societal transition in the most reasonably ethical way that does not hinder the value of its use?”