
    Reuters reveals yet another twist in the OpenAI tale


The news agency says that before CEO Sam Altman was fired, several staff wrote a letter to the board warning of an AI discovery that could “threaten humanity”

Just when you thought the saga of the firing and reinstatement of Sam Altman, the global face of generative AI, couldn’t get any more convoluted, it has. See ‘The story so far…’ at the bottom for a quick recap of events.

Reuters reports that just before the board of the non-profit[1] OpenAI fired its CEO and co-founder, Sam Altman, apparently out of the blue, on 17 November, several staff researchers wrote to the board warning of a new AI discovery that they said could threaten humanity.

    Grievances

According to Reuters’ unnamed sources, the letter was another ‘grievance’ that contributed to the board firing Altman. One of the board’s chief concerns, it has emerged, was Altman’s desire to commercialise advances before their implications were properly understood, and without keeping the board in the loop about his intentions. Or, as the board put it, he was “not consistently candid in his communications”.

Apparently OpenAI is not prepared to comment, but it has acknowledged, in an internal message to staff, the existence of a project called Q* (pronounced Q star) and that the board received a letter about the project while Altman was still CEO.

Some staff thought Q* could be a breakthrough in artificial general intelligence (AGI), which OpenAI defines as “autonomous systems that surpass humans in most economically valuable tasks”.

From simple sums to destroying humans

Using vast computing resources, the new model can apparently solve certain mathematical problems. Although the tasks carried out using the model were only at the level of primary-school maths, researchers believed the potential was immense, which is why some of them sent the warning letter to the board.

At the moment, generative AI is good at writing and language translation because it uses statistics to predict the next word; as there are almost always many plausible options, it will inevitably get some wrong. In maths, however, answers are simply right or wrong, so if reasoning could be framed in the same way, the thinking goes, AI’s reasoning capabilities could begin to ape human intelligence.
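To make that contrast concrete, here is a minimal, hypothetical sketch in Python of statistical next-word prediction. The toy bigram table and its probabilities are invented purely for illustration; a real language model learns such statistics from vast amounts of text:

    import random

    # Toy bigram "model": each word maps to plausible next words with
    # probabilities. The numbers here are made up; a real model learns them.
    NEXT_WORD = {
        "the": [("cat", 0.5), ("dog", 0.3), ("answer", 0.2)],
        "cat": [("sat", 0.6), ("ran", 0.4)],
    }

    def predict_next(word):
        # Sample the next word from the toy distribution.
        words, probs = zip(*NEXT_WORD.get(word, [("<end>", 1.0)]))
        return random.choices(words, weights=probs, k=1)[0]

    # Language: many acceptable continuations, so some predictions will be wrong.
    print(predict_next("the"))  # could print "cat", "dog" or "answer"

    # Maths: exactly one continuation is correct, so an answer can be checked.
    proposed = 4
    print("correct" if proposed == 2 + 2 else "wrong")

The language prediction can only ever be probably right; the maths check is exact, which is why reasoning framed this way is seen as verifiable.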

    Researchers think this could be applied to new scientific research because, unlike even the most sophisticated calculator, AGI can generalise (hence the name), learn and ‘comprehend’, sort of.

While the exact nature of the safety concerns expressed in the letter to the board remains under wraps, the prospect of creating AI-powered machines that decide disposing of humans is in their best interests has long been a fear among those developing AI – and some of the rest of us.

Altman wanted to make ChatGPT one of the fastest-growing software applications in history and to attract the investment and computing resources needed to pursue the potential of AGI. In this he succeeded, probably beyond anybody’s wildest expectations, but not in line with the non-profit’s ideals.

I subscribe to the idea that a catastrophe with AI will be the result of unintended consequences and, ironically, as a number of pundits have pointed out, that is what has happened here. Ilya Sutskever, OpenAI’s co-founder, chief scientist and a board member, was instrumental in Altman being fired, in order to protect the non-profit principles of the company.

Instead, you don’t need to bet $13 billion to know that OpenAI’s corporate governance is in for a seriously big revision.

    The story so far…

    Events leading up to the firing

According to the New York Times, there had been considerable tension in the boardroom, with Altman trying to push out Helen Toner, who had co-authored a paper he thought was overly critical of OpenAI. There had also been disagreement about who should fill board vacancies.

The researchers’ letter warning about the dangerous potential of the Q* project was sent to the board.

    Friday 10 November

“Which would you have more confidence in? Getting your technology from a non-profit, or a for-profit company that is entirely controlled by one human being?” asked Brad Smith, president of Microsoft, at a conference in Paris, quoted by The Economist. This praise for OpenAI’s unusual organisational structure was also a side-swipe at Meta, which is controlled by Mark Zuckerberg.

However, this unusual structure meant that, despite its $13 billion investment in OpenAI, Microsoft had no say in, and was not consulted about, Altman’s firing just a week later.

    Thursday 16 November

“Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” Altman said at the Asia-Pacific Economic Cooperation summit, whose audience included some world leaders.

    Friday 17 November

The board of OpenAI fired its talisman, CEO and co-founder Sam Altman, apparently out of the blue. The non-profit board members were Adam D’Angelo, Helen Toner, Tasha McCauley and Ilya Sutskever.

CTO Mira Murati stepped in as interim CEO. The non-profit’s board was more concerned with safety than commercialisation, whereas, it seemed to them, their CEO’s priorities were the other way round.

A huge backlash followed. Microsoft had reportedly pumped $13 billion into the start-up, and its CEO, Satya Nadella, was completely blindsided by the move and, by all accounts, furious. No doubt so were Microsoft’s shareholders.

    The board also demoted OpenAI’s President, Greg Brockman, who resigned in protest at Altman’s exit.

Sunday 19 November

    Microsoft said it would set up its own AI division with Altman at the helm and Brockman alongside him.

Monday 20 November

More than 750 OpenAI staff sent a letter to the board saying that unless Altman was reinstated, they would quit and join Microsoft. One of the signatories was apparently board member Ilya Sutskever.

Also on Monday, OpenAI appointed Twitch co-founder Emmett Shear as the new interim CEO. It didn’t take long for some embarrassing tweets from Shear to resurface, including unsavoury comments about Nazism and rape/non-consent fantasies, plus his writing, when an intern at Microsoft, “Every paycheck felt like I was getting the payment for a little chunk of my soul in the mail”.

Wednesday 22 November

Altman and OpenAI reached an agreement in principle for him to return as CEO, and an ‘initial’ new board was put in place.


[1] OpenAI’s unusual organisational structure: It was founded as a non-profit in 2015 by Altman and a group of Silicon Valley investors and entrepreneurs, including Elon Musk, who collectively pledged $1 billion towards OpenAI’s goal of building artificial general intelligence (AGI) that could outperform humans on many intellectual tasks.

It became clear that the firm needed cash to pay for expensive computing capacity and for the most talented people, yet only a fraction of the pledged amount had been paid.

To stay true to its ideals, OpenAI capped investors’ profits at 100 times their investment, although that will change to 20% annually from 2025. Profits above the cap were to go to the parent non-profit organisation. The company reserved the right to reinvest all profit back into the firm until its goal of creating AGI is achieved.
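As a rough, hypothetical illustration of the cap (the 100x multiple comes from the description above; the $1m stake is invented):

    # Hypothetical illustration of the capped-profit rule described above.
    # The 100x multiple comes from the text; the $1m stake is made up.
    investment = 1_000_000               # an investor puts in $1m
    cap_multiple = 100                   # returns capped at 100x the stake
    max_return = investment * cap_multiple
    print(f"Maximum return: ${max_return:,}")   # Maximum return: $100,000,000
    # Anything earned beyond this cap flows to the parent non-profit,
    # not to the investor.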

    Once the goal is reached, the AGI is not intended to generate a financial return.