
OpenAI's Resurgence: A Defining Chapter with Sam Altman's Return


OpenAI’s recent power struggle and the return of co-founder Sam Altman have marked a defining chapter in the company’s history. With a drastically changed board and a renewed sense of purpose, OpenAI is emerging from the ashes, ready to prove itself once again. However, the challenges ahead are significant. In this blog post, we will explore the implications of Altman’s return, the composition of the new board, and the potential impact on OpenAI’s founding philanthropic aims. Join us as we delve into this pivotal moment for OpenAI and examine what it means for the future of artificial general intelligence.

The highly publicized power struggle within OpenAI, sparked by the dismissal of co-founder Sam Altman, has seemingly concluded, at least for now. Taking stock of the aftermath, however, invites some reflection.

The moment has the feel of a eulogy, as if OpenAI underwent a transformative event, a rebirth with an uncertain trajectory. Altman’s return reinstates him as the driving force, but it raises questions about whether that outcome was justified. The nascent board of directors introduces concerns of its own: it is composed exclusively of white men. That composition poses a challenge to OpenAI’s original philanthropic vision and suggests vulnerability to the influence of more capitalist interests. The pivotal question lingers: has OpenAI truly evolved, or is it a remnant of its former self, drifting from its founding principles?

Certainly, the former iteration of OpenAI was far from flawless.

As of Friday morning, OpenAI’s board consisted of six individuals: Altman, Chief Scientist Ilya Sutskever, President Greg Brockman, tech entrepreneur Tasha McCauley, Quora CEO Adam D’Angelo, and Helen Toner, director of strategy at Georgetown’s Center for Security and Emerging Technology. Notably, the board was intricately linked to a nonprofit entity, holding a majority stake in OpenAI’s for-profit division and possessing decisive authority over its activities, investments, and overall trajectory.

The distinctive structure of OpenAI, conceived with noble intentions by its co-founders, including Altman, was underpinned by a concise 500-word charter. This document emphasized the board’s responsibility to ensure “that artificial general intelligence benefits all humanity,” leaving the interpretation of this mission to the board members. Interestingly, the terms “profit” and “revenue” found no place in this guiding document. According to reports, Toner once conveyed to Altman’s executive team that triggering OpenAI’s collapse “would actually be consistent with the [nonprofit’s] mission.”

While this arrangement seemed functional within the confines of its unique context, especially during the initial years, the entry of investors and influential partners added a layer of complexity to OpenAI’s dynamics.

Altman’s abrupt termination became a rallying point for Microsoft and OpenAI’s workforce. The board’s decision, made without prior notice to the majority of OpenAI’s 770-person staff, triggered a wave of discontent among the startup’s backers, both in private conversations and public statements.

Satya Nadella, Microsoft’s CEO and a significant collaborator with OpenAI, reportedly expressed intense displeasure upon learning of Altman’s departure. Vinod Khosla, founder of Khosla Ventures and another key backer of OpenAI, explicitly stated on X (formerly Twitter) that the fund wanted Altman back. Meanwhile, Thrive Capital, along with Tiger Global Management and Sequoia Capital, was reportedly contemplating legal action against the board if weekend negotiations did not lead to Altman’s reinstatement.

Despite appearances, OpenAI employees seemed aligned with external investors. A notable majority, including Sutskever, who seemingly had a change of heart, signed a letter threatening mass resignation if the board didn’t reverse its decision. However, it’s crucial to consider that these employees had significant stakes in the company’s stability, given potential job offers from major corporations like Microsoft and Salesforce.

Furthermore, OpenAI had engaged in discussions, spearheaded by Thrive, to potentially sell employee shares. This move aimed to elevate the company’s valuation from $29 billion to a range between $80 billion and $90 billion. Altman’s sudden departure and the subsequent rotation of interim CEOs, raising questions about the leadership’s stability, led Thrive to reconsider, casting uncertainty over the planned share sale.



Altman emerged victorious after a five-day battle, but the aftermath raises questions about the toll of this internal conflict. Following intense and suspenseful days, a resolution has been reached. Altman, along with Brockman, who resigned in protest on Friday over the board’s decision, is reinstated. However, Altman’s return is subject to a background investigation into the concerns that led to his initial removal. A new transitional board has been established, addressing one of Altman’s key demands. OpenAI is set to maintain its existing structure, with the additional provision of capping investors’ profits and granting the board the freedom to make decisions not solely driven by revenue considerations.

Salesforce CEO Marc Benioff took to X to declare that “the good guys” emerged victorious. It is premature, however, to draw conclusions about the ultimate implications of this resolution.

Altman emerged victorious in a battle against a board that accused him of lacking consistent candor and prioritizing growth over mission. One notable incident involved his criticism of Toner, co-author of a safety-focused paper, to the extent of attempting to remove her from the board. Another instance saw Altman infuriating Sutskever by hastening the launch of AI-powered features at OpenAI’s inaugural developer conference.

Despite repeated opportunities, the board refrained from providing explanations, citing potential legal challenges. Altman’s dismissal, marked by unnecessary drama, raises questions about the transparency of the process. However, it cannot be dismissed outright that the directors may have had valid reasons based on their interpretation of OpenAI’s humanistic directive.

The newly constituted board, featuring Bret Taylor, D’Angelo, and Larry Summers, signals a potential shift in interpreting OpenAI’s mission. Taylor, a seasoned entrepreneur, and Summers, with extensive business and government connections, bring valuable perspectives. However, the current composition lacks diversity, falling short of reflecting the intended variety of viewpoints. Notably, the current lineup would not meet the standard set by the EU’s gender-balance directive, which requires large listed companies to fill at least 40% of their non-executive board seats with members of the underrepresented sex (though OpenAI, as a private U.S. company, is not bound by it). This raises concerns about the board’s ability to represent a diverse range of perspectives.

Why Concerns Mount Over OpenAI's New Board

I am not alone in expressing disappointment; numerous AI academics have voiced their frustrations on X in response to recent developments.

Noah Giansiracusa, a mathematics professor at Bentley University and an authority on social media recommendation algorithms, raises concerns about the board’s all-male composition. He particularly critiques the nomination of Summers, noting his history of making unflattering remarks about women. Giansiracusa highlights the optics of this situation, emphasizing that for a company leading AI development and shaping the world, the lack of diversity is troubling.

He points out the contradiction with OpenAI’s main goal of developing artificial general intelligence for the benefit of all humanity, as recent events do not instill confidence in achieving this mission. Giansiracusa also notes the historical trend of placing women in roles focused on safety in tech, perpetuating the narrative of men receiving credit for innovation and leadership.

Christopher Manning, the director of Stanford’s AI Lab, is slightly more lenient but concurs with Giansiracusa’s assessment. Manning acknowledges that the new OpenAI board may still be incomplete, but he shares concerns about its current membership. He points to the absence of individuals with deep knowledge of the responsible use of AI in human society, coupled with a composition comprising only white males. He views this as a less-than-promising start for such a crucial and influential AI company.

The AI industry grapples with widespread inequity, evident from the annotators labeling data for generative AI models to the emergence of harmful biases in trained models, including those developed by OpenAI. While Summers has expressed concern about AI’s potential harm to livelihoods, critics I’ve spoken to find it challenging to believe that the current OpenAI board will consistently prioritize addressing these challenges, especially in comparison to a more diverse board.

This prompts the question: Why didn’t OpenAI consider recruiting well-known AI ethicists like Timnit Gebru or Margaret Mitchell for the initial board? Were they unavailable, did they decline, or did OpenAI not make the effort to reach out? The exact reasons remain uncertain.

Reports suggest that OpenAI contemplated Laurene Powell Jobs and Marissa Mayer for board roles, but their proximity to Altman led to their exclusion. Condoleezza Rice’s name was also considered but ultimately passed over. The absence of such figures raises questions about OpenAI’s commitment to assembling a board that reflects diverse perspectives and expertise in addressing the ethical challenges of AI.

OpenAI stands at a crossroads, offering an opportunity to showcase greater wisdom and global perspective in filling the remaining five board seats — or three, if Altman and a Microsoft executive assume roles, as rumored. Failure to pursue a more diverse path, as highlighted by Daniel Colson, director of the AI Policy Institute, on X, could substantiate his concern that entrusting the responsibility of ensuring responsible AI development to a few individuals or a singular lab may be inherently untrustworthy.



As OpenAI moves forward with Sam Altman back at the helm, the tech world eagerly awaits the company’s next steps. The recent power struggle and board shakeup have raised important questions about the direction and values of OpenAI. While Altman’s return brings a sense of stability, concerns about the lack of diversity on the new board and potential shifts away from OpenAI’s philanthropic goals remain. It is crucial for OpenAI to prioritize responsible AI development and address biases and ethical considerations.

By filling the remaining board seats with individuals who bring diverse perspectives and expertise, OpenAI can demonstrate its commitment to developing AI that benefits all of humanity responsibly. The choices made in these appointments will shape OpenAI’s future and determine its ability to fulfill its mission. With the world watching, OpenAI has a lot to prove, but the potential for positive impact is immense. Let us hope that OpenAI rises to the occasion and leads the way in shaping the future of artificial general intelligence for the benefit of all.

What are your thoughts on Sam Altman’s return to OpenAI and the implications it may have for the company’s direction? How important do you think it is for OpenAI to address biases and ethical considerations in the development of artificial general intelligence? What role do you think OpenAI plays in shaping the future of artificial general intelligence, and what impact do you believe it can have on society? Share your thoughts below.
