llama.vim #
A nice AI completion plugin from Georgi Gerganov, the creator of llama.cpp. The completion seems pretty good, though a local model will inevitably be either slower or less accurate than a hosted service like Codeium. Of course, the main benefit of llama.vim is being able to generate completions locally, meaning that no private code is accidentally transferred to external servers. While there are already many Vim plugins available for local AI completion, llama.vim’s main advantage is its ease of use: it integrates directly with llama.cpp, requires no configuration, and the completion server starts with a single command (sketched below). I also like the use of VimL over Lua, since a lot of new plugins are being written solely for Neovim instead of also supporting Vim.
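As a rough sketch of that setup (the plugin repo, model repo, and port below are assumptions based on the llama.vim README at the time; check it for the currently recommended command):

```sh
# Install the plugin, e.g. with vim-plug (repo name assumed):
#   Plug 'ggerganov/llama.vim'
#
# Start the completion server with a FIM-capable model; llama.vim
# expects it on localhost:8012 by default (model repo and port assumed).
llama-server \
    -hf ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF \
    --port 8012
```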
# 2025-01-23 - #ai, #hacker-news, #vim

AGI “for Humanity” #
Sam Altman is unreasonably good at convincing officials that his for-profit endeavours are actually net benefits for humanity. It’s pretty clear that he’s deluded himself about the true state of capitalism, where creating a larger OpenAI does not automatically translate into better conditions for the general public. He shows this way of thinking in posts like The Intelligence Age, which not-so-subtly converges on the necessity of additional funding (read: needing to create a for-profit to raise said funding) to “make sure AI isn’t solely controlled by the rich”:
> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
This was back in November, an eternity ago in the fast-paced AI scene (remember, AGI was only “a few thousand days” away!), and reaching AGI soon is still very much speculation. Of course, Sam Altman is still interested in making more money, and still views OpenAI’s expansion as a universal benefit for the American people. Therefore, to him, AGI expansion is the only way forward: a technology that OpenAI will be perpetually close to creating, as the one in charge of defining what constitutes AGI is Altman himself.
That’s the state in which the Stargate Project finds itself, with the goal of reaching AGI eventually. But why should OpenAI and co. ever admit to actually creating AGI? The group currently has a nice $500B in funding, support from the U.S. government, and easily maintains the largest share of the AI market. None of this will disappear soon, especially as MGX and SoftBank can easily prioritize funding Altman’s pursuits on the off chance that AGI is actually achieved. And even if Stargate’s money sources somehow disappear, framing the project as a “net positive for all Americans” allows the U.S. government to easily support the endeavour (and grant OpenAI and Altman even more money).
Even if AGI is achieved through the Stargate Project, what would be the resolution? OpenAI claims that the project will create “hundreds of thousands” of American jobs. As pointed out by many Hacker News commenters, this cannot be predicted with any certainty and will likely play out like Foxconn’s $10B investment in Wisconsin (13,000 forecasted jobs; ~1,500 actual), with the efforts solely benefitting OpenAI and the other private corporations involved. But even then, from Altman’s viewpoint, the endeavour is still a net positive for humanity: any increase to OpenAI’s valuation just secures a future with more AI capabilities (which surely will be beneficial to all).
Note that this way of thinking isn’t limited to just Sam Altman: most “tech leaders” are figureheads for their respective companies, and their political views and personal opinions should generally not be read as genuine utopian ideals.
# 2025-01-22 - #starred, #ai, #openai

The Most Mario Colors #
This post from Louie Mantia features one of my favorite types of analysis: collecting a lot of data to answer a seemingly obvious question and finding interesting patterns in the process. I hadn’t previously noticed the large variety of letter colors across the different Mario games, with even the most common combinations appearing in at most 5 of the 40 games shown. In a similar style, I also really liked these videos by jan Misali, which cover the interesting question of which games are actually in the Super Mario series (both videos are definitely worth their long durations).
# 2025-01-21 - #gaming

DeepSeek-R1 #
DeepSeek’s models have been excellent given their limited resources, and the same holds true for their first-generation reasoning model. DeepSeek-R1 benchmarks similarly to OpenAI’s o1 and, as is the norm with many new AI startups, the model weights are MIT licensed and openly available. The distilled models are also extremely interesting, as they bring reasoning capabilities to model sizes as low as 1.5B, meaning they can easily be run on-device rather than relying on DeepSeek’s API. R1 seems to perform pretty well overall, but this Hacker News comment has an amazing transcript of the model contradicting itself on the number of ’r’s in “strawberry” (as is tradition), so there’s clearly room for future improvements.
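For example, a minimal sketch of running the smallest distill locally through Ollama (the model tag is an assumption; any llama.cpp-compatible runner works similarly):

```sh
# Pull and chat with the 1.5B distilled model entirely on-device
# (tag assumed to map to DeepSeek-R1-Distill-Qwen-1.5B).
ollama run deepseek-r1:1.5b
```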
# 2025-01-20 - #ai, #hacker-news

Individual AI’s Environmental Impacts #
While this article misses the recurring environmental impacts of different companies racing to train increasingly better LLMs, it provides a good analysis of the impacts of individual ChatGPT questions. I’ve always maintained that promoting individual behavior changes has little impact on preventing environmental damage, and found this quote especially good at proving that point:
> Getting worried about whether you should use LLMs is as much of a distraction to the real issues involved with climate change as worrying about whether you should stop the YouTube video you’re watching 12 seconds early for the sake of the Earth.
While watching 12 seconds of a YouTube video and asking a ChatGPT question are obviously not directly comparable, it’s pretty clear that neither is especially important to worry about in terms of its impact. I do agree that the promotion of LLMs will drive further model training, which carries its own environmental costs, but hopefully the need to train entirely new models will diminish quickly as the relative gains in model performance plateau.
# 2025-01-18 - #ai, #hacker-news