#openai
OpenAI Has Been on the Wrong Side of History #
Sam Altman, in response to a request to release model weights:
> yes, we are discussing. i personally think we have been on the wrong side of history here and need to figure out a different open source strategy; not everyone at openai shares this view, and it’s also not our current highest priority.
I saw this comment on Slashdot yesterday but didn’t realize it came from Altman’s personal Reddit account. I had previously assumed that Altman said this in a conversation with some politician, where absolutely nothing could be taken literally without considering OpenAI’s motives.
I still don’t think this Reddit comment can really be read as OpenAI returning to open source, especially since Altman has so much leverage over the company’s direction and could have pivoted to a more open company long ago. The discrepancy is especially obvious when comparing OpenAI to Anthropic, which doesn’t publish open model weights but at least tries to make parts of its technology accessible without a $200/month subscription.
This comment is obviously catered to the audience that thinks OpenAI is going to implode because of DeepSeek, but there isn’t really any advantage for OpenAI in open-sourcing its stack, since the company is well established compared to the Chinese AI lab. I would guess that the main purpose of the comment is just to give investors something to work with when weighing OpenAI’s status against other AI companies (primarily DeepSeek, of course), so I wouldn’t derive any real meaning from it.
# 2025-02-02 - #ai, #openai, #simon-willison

AGI “for Humanity” #
Sam Altman is unreasonably good at convincing officials that his for-profit endeavours are actually net benefits for humanity. It’s pretty clear that he has deluded himself about the true state of capitalism, where a larger OpenAI does not automatically translate into better conditions for the general public. He shows this way of thinking in posts like The Intelligence Age, which not-so-subtly converges on the necessity of additional funding (read: needing to create a for-profit to raise said funding) to “make sure AI isn’t solely controlled by the rich”:
> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
This was back in November, an eternity ago in the fast-paced AI scene (remember, AGI was only “a few thousand days” away!), and reaching AGI soon is still very much speculation. Of course, Sam Altman is still interested in making more money, and still frames OpenAI’s expansion as a universal benefit for the American people. To him, therefore, AGI is the only way forward: a technology that OpenAI will be perpetually close to creating, since the one in charge of defining what constitutes AGI is Altman himself.
That’s the state in which the Stargate Project finds itself, with the goal of eventually reaching AGI. But why should OpenAI and co. ever admit to actually creating AGI? The group currently has a nice $500B in funding, support from the U.S. government, and comfortably maintains the largest share of the AI market. None of that will disappear soon, especially as MGX and SoftBank can easily keep prioritizing Altman’s pursuits on the off chance that AGI is actually achieved. And even if Stargate’s funding sources somehow dry up, framing the project as a “net positive for all Americans” lets the U.S. government easily support the endeavour (and grant OpenAI and Altman even more money).
Even if AGI is achieved through the Stargate Project, what would the resolution be? OpenAI claims that the project will create “hundreds of thousands” of American jobs. As many Hacker News commenters have pointed out, this cannot be predicted with any certainty and will likely turn out like Foxconn’s $10B investment in Wisconsin (13,000 workers forecast; roughly 1,500 actually hired), with the effort solely benefiting OpenAI and the other private corporations involved. But even then, from Altman’s viewpoint, the endeavour is still a net positive for humanity: any increase in OpenAI’s valuation just secures a future with more AI capabilities (which surely will benefit everyone).
Note that this way of thinking isn’t limited to Sam Altman: most “tech leaders” are figureheads for their respective companies, and their political views and personal opinions should generally not be read as disinterested, utopian visions.
# 2025-01-22 - #starred, #ai, #openai

o1 Thinking In Chinese #
While I haven’t personally encountered it in a while, ChatGPT used to occasionally autogenerate chat titles in Spanish instead of English. I assume that o1 thinking in Chinese occurs for a similar set of reasons, since both seem related to the multilingual datasets the models are trained on. As an aside, I’m curious whether Chinese makes up a significantly higher percentage of the training data for Chinese LLMs like DeepSeek, since English and Chinese together already make up most of the corpora for English-language models.
# 2025-01-15 - #ai, #openai, #slashdot