Sam Altman’s ouster at OpenAI exposes growing rift in AI industry - The Washington Post
<https://www.washingtonpost.com/technology/2023/11/18/sam-altman-ilya-sutskev...>

The former CEO was pushing OpenAI to become the premier business in AI. Some employees saw that as a betrayal of their mission.

By Gerrit De Vynck, Nitasha Tiku and Pranshu Verma

SAN FRANCISCO — At noon Friday, Sam Altman logged onto Google Meet and found himself face-to-face with his board of directors. The CEO of the pioneering artificial intelligence company OpenAI had spent the previous day at the exclusive Asia-Pacific Economic Cooperation conference in San Francisco, where he talked up the potential of artificial intelligence and its impact on humanity. The week before, Altman had been on a different stage, announcing OpenAI’s latest product road map and expansion plans.

Now, however, Altman learned that he was being fired. According to a post on X by OpenAI co-founder and president Greg Brockman, who quit the company in solidarity with Altman, the news was delivered by Ilya Sutskever, the company’s chief researcher.

The power struggle revolved around Altman’s push toward commercializing the company’s rapidly advancing technology versus Sutskever’s concerns about OpenAI’s commitments to safety, according to people familiar with the matter. The schism between Altman and Sutskever mirrors a larger rift in the world of advanced AI, where a race to dominate the market has been accompanied by a near-religious movement to prevent AI from advancing beyond human control.

While questions remain about what spurred the board’s decision to oust Altman, growing tensions had become impossible to ignore as Altman rushed to launch products and turn OpenAI into the next big technology company. His abrupt and surprising departure leaves OpenAI’s future uncertain, say venture capitalists and AI industry executives.
Except for Sutskever, the remaining board members are more closely aligned with a movement to stop existential risks from advanced AI than with scaling a business.

Silicon Valley funders, meanwhile, eager to invest, are already betting that Altman and Brockman will launch their own AI venture to keep the AI arms race going. “All of a sudden, it’s open season in the AI landscape,” investor Sarah Guo, founder of Conviction AI, posted on X.

By Saturday, OpenAI’s investors were already trying to woo Altman back. “Khosla Ventures wants [Altman] back at [OpenAI] but will back him in whatever he does next,” Vinod Khosla, one of the company’s investors, said in a post on X. Altman and Brockman could not be reached for comment.

Senior OpenAI executives said they were “completely surprised” and had been speaking with the board to try to understand the decision, according to a memo sent to employees on Saturday by chief operating officer Brad Lightcap that was obtained by The Washington Post. “We still share your concerns about how the process has been handled,” Lightcap said in the memo. “We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.”

Altman’s ouster also caught rank-and-file employees within OpenAI off-guard, according to a person familiar with internal conversations, speaking on the condition of anonymity to discuss private conversations. The staff is “still processing it,” the person said. In text messages that were shared with The Post, some OpenAI research scientists said Friday afternoon that they had “no idea” Altman was going to be fired, and described being “shocked” by the news. One scientist said they were learning about what happened with Altman’s ouster at the same time as the general public.
Over the past year, some OpenAI employees have expressed concerns with Altman’s focus on building consumer products and driving up revenue, which some of those employees saw as at odds with the company’s original mission to develop AI that would benefit all of humanity, a person familiar with employees’ thinking said, speaking on the condition of anonymity.

Under Altman, OpenAI had been aggressively hiring product development employees and building up its consumer offerings. Its technology was being used by thousands of start-ups and larger companies to run AI features and products that are already being pitched and sold to customers. During its first-ever developer conference, Altman announced an app-store-like “GPT store” and a plan to share revenue with users who created the best chatbots using OpenAI’s technology, a business model similar to how YouTube gives a cut of ad and subscription money to video creators.

To the tech industry, that announcement signaled that OpenAI wanted to become a major player in its own right, not limit itself to building AI models for other companies. “This is not your standard start-up leadership shake-up. 10,000’s of start-ups are building on OpenAI,” Aaron Levie, CEO of cloud storage company Box, said on X. “This instantly changes the structure of the industry.”

OpenAI started as a nonprofit research lab launched in 2015 to safely build superhuman AI and keep it away from corporations and foreign adversaries. Believers in that mission bristled against the company’s transformation into a juggernaut start-up that could become the next big name in Big Tech.

Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.” “My hope is that we can do a lot more good for the world than just become another corporation that gets that big,” D’Angelo said in the interview. He did not respond to requests for comment.
Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech-billionaire-backed foundation that supports projects aimed at preventing AI from posing catastrophic risks to humanity: Helen Toner, the director of strategy and foundational research grants at the Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year. Toner has previously spoken at conferences for a philanthropic movement closely tied to AI safety. McCauley is also involved in that work.

Toner occupies the board seat once held by Holden Karnofsky, a former hedge fund executive and CEO of Open Philanthropy, which invested $30 million in OpenAI to gain a board seat and influence the company toward AI safety. Karnofsky, who is married to Anthropic co-founder Daniela Amodei, left the board in 2021 after Amodei and her brother Dario Amodei, who both worked at OpenAI, left to launch Anthropic, an AI start-up more focused on safety.

OpenAI’s board had already lost its most powerful outside members in the past several years. Elon Musk stepped down in 2018, with OpenAI saying his departure was to remove a potential conflict of interest as Tesla developed AI technology of its own. LinkedIn co-founder Reid Hoffman, who also sits on Microsoft’s board, stepped down as an OpenAI director in March, citing a conflict of interest after starting a new AI start-up called Inflection AI that could compete with OpenAI. Shivon Zilis, an executive at Musk’s brain-interface company Neuralink and one of his closest lieutenants, also left in March.

With the departures of Altman and Brockman, OpenAI is being governed by four members: Toner, McCauley, D’Angelo and Sutskever, whom OpenAI paid $1.9 million in 2016 for joining the company as its first research director, according to tax filings. Independent directors don’t hold equity in OpenAI.
While at the University of Toronto, Sutskever helped create AI software called AlexNet, which classified objects in photographs with more accuracy than any previous software had achieved, laying much of the foundation for the fields of computer vision and deep learning.

He recently shared a radically different vision for how AI might evolve in the near term. Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans. At the bare minimum, Sutskever added, it’s important to work on controlling superintelligence today. “Imprinting onto them a strong desire to be nice and kind to people — because those data centers,” he said, “they will be really quite powerful.”

OpenAI has a unique governing structure, which it adopted in 2019. It created a for-profit subsidiary that allowed investors a return on the money they invested into OpenAI, but capped how much they could get back, with the rest flowing back into the company’s nonprofit. The structure also allows OpenAI’s nonprofit board to govern the activities of the for-profit entity, including the power to fire its chief executive. Microsoft, which has invested billions of dollars in OpenAI in exchange for special access to its technology, doesn’t have a board seat.

Altman’s ouster was an unexpected and unpleasant surprise, according to a person familiar with internal discussions at the company who spoke on the condition of anonymity to discuss sensitive matters. A Microsoft spokesperson declined to comment on the prospect of Altman returning to the company. On Friday, Microsoft said it was still committed to its partnership with OpenAI.

As news of the circumstances around Altman’s ouster began to come out, Silicon Valley circles have turned to anger at OpenAI’s board.
“What happened at OpenAI today is a board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs,” Ron Conway, a longtime venture capitalist who was one of the attendees at OpenAI’s developer conference, said on X. “It is shocking, it is irresponsible, and it does not do right by Sam and Greg or all the builders in OpenAI.”
Good morning! Oh, how nice: I'm recording all these new episodes so I can binge them in an all-nighter when I have nothing better to do :-) In the meantime, if anyone wants to post spoilers, go right ahead *without restraint*; it won't ruin anything for me, if anything it makes it more fun.

Alberto Cammozzo via nexa <nexa@server-nexa.polito.it> writes:
<https://www.washingtonpost.com/technology/2023/11/18/sam-altman-ilya-sutskev...>
Spoiler: the WaPo and all the other publishers thick as thieves with the US government are an integral part of the saga! :-O
Sam Altman’s ouster at OpenAI exposes growing rift in AI industry
No no: it's much, much messier than that. The November 18 episode ends with a cliffhanger (a warning?!?!) that CANNOT be ignored: «As news of the circumstances around Altman’s ouster began to come out, Silicon Valley circles have turned to anger at OpenAI’s board.» That is: the Silicon Valley Circle™ got furious and... [...]

Recap of the episodes preceding and following the November 18 one: [1] https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI (too bad there's no git repo for that page!)
The schism between Altman and Sutskever mirrors a larger rift in the world of advanced AI,
...and indeed on November 20 the news came out that «[Sutskever] publicly apologized for his participation in the board's previous actions. The Verge reported that Altman intends to return to OpenAI with support from Sutskever.» Warning received.
where a race to dominate the market has been accompanied by a near-religious movement to prevent AI from advancing beyond human control.
Certain things might go unnoticed by the casual reader, so allow me to underline the *spectacular* inversion of reality here: it is not /they/ who turned the "belief in AI" into a religion prophesying the advent of "The Singularity", which will let "AI" advance beyond human intelligence (doing humanity great good or great harm, depending on the denomination); no, it is /the others/ who belong to a quasi-religious movement to keep "AI" from advancing beyond human control. Interesting how one can pronounce such vacuities from sidereal heights. [...]
Sutskever helped create AI software at the University of Toronto, called AlexNet [...] He recently shared a radically different vision for how AI might evolve in the near term. Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans.
Oh yes, "patapim, patapum!"... truly a radically different vision, and such a level-headed one, I'd say!
At the bare minimum, Sutskever added, it’s important to work on controlling superintelligence today. “Imprinting onto them a strong desire to be nice and kind to people
Right, and yet /the others/ are supposed to be the religious ones, eh?!? :-D

Well, there's really very little to comment on in the rest of this article. [...]

...but from the big Wikipedia recap [1] three things must NOT go unnoticed, in reverse order of importance:

1. Jean-Noël Barrot, the French minister for digital transition and telecommunications, immediately felt obliged to tweet, ever so calmly: «Sam Altman, his team and their talents are welcome in France, where we are accelerating to put artificial intelligence at the service of the common good» [2]

2. The *reasons* why the board fired Altman remain a deep mystery, beyond the generic «he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.» [3]

3. Will Hurd [4], /apparently/ now elected to the US House, for nine years (2000-2009) a CIA clandestine operations officer and a former member of the OpenAI board, was called (back) between Friday the 17th in the afternoon and Sunday the 19th in the afternoon by a company representative to "dig into the details" of the affair [5]

And so, with this further cliffhanger, we close this umpteenth episode of «Game of AI Thrones».

Regards,
380°

[2] https://web.archive.org/web/20231119060443/https://www.politico.eu/article/o...
[3] https://web.archive.org/web/20231117210540/https://openai.com/blog/openai-an...
[4] https://en.wikipedia.org/wiki/Will_Hurd
[5] https://web.archive.org/web/20231121112617/https://www.nytimes.com/2023/11/1...

--
380° (Giovanni Biscuolo public alter ego)

«Incompetent as we are, we have no standing whatsoever to suggest anything»

Disinformation flourishes because many people care deeply about injustice but very few check the facts. Ask me about <https://stallmansupport.org>.