2025-01-31 06:30:33
RT Thomas Wolf
Finally took time to go over Dario's essay on DeepSeek and export control and to be honest it was quite painful to read. And I say this as a great admirer of Anthropic and big user of Claude*
The first half of the essay reads like a lengthy attempt to justify that closed-source models are still significantly ahead of DeepSeek. However, it mostly refers to internal, unpublished evals, which limits the credit you can give it, and statements like « DeepSeek-V3 is close to SOTA models and stronger on some very narrow tasks » transforming into the general conclusion « DeepSeek-V3 is actually worse than those US frontier models — let’s say by ~2x on the scaling curve » left me generally doubtful. The same applies to the takeaway that all of DeepSeek’s discoveries and efficiency improvements were made long ago by closed-model companies, a statement resting mostly on a comparison of DeepSeek’s openly published $6M training cost with some vague « few $10M » on Anthropic’s side, without much more detail. I have no doubt the Anthropic team is extremely talented, and I’ve regularly shared how impressed I am with Sonnet 3.5, but this long-winded comparison of open research with vague closed research and undisclosed evals left me less convinced of their lead than I was before reading it.
Even more frustrating was the second half of the essay, which dives into the US-China race scenario and totally misses the point that the DeepSeek model is open-weights, and largely open-knowledge thanks to its detailed tech report (and feel free to follow Hugging Face’s open-r1 reproduction project for the remaining non-public part: the synthetic dataset). If both the DeepSeek and Anthropic models had been closed source, yes, the arms-race interpretation could have made sense, but with one of the models freely available for download along with a detailed scientific report, the whole « closed-source arms-race competition » argument feels artificial and unconvincing in my opinion.
Here is the thing: open-source knows no border. Both in its usage and its creation.
Every company in the world, be it in Europe, Africa, South America or the USA, can now directly download and use DeepSeek without sending data to a specific country (China, for instance) or depending on a specific company or server to run the core part of its technology.
And just as most open-source libraries in the world are typically built by contributors from all over the world, we’ve already seen several hundred derivative models on the Hugging Face hub, created everywhere in the world by teams adapting the original model to their specific use cases and explorations.
What's more, with the open-r1 reproduction and the DeepSeek paper, the coming months will clearly see many open-source reasoning models released by teams from all over the world. Just today, two other teams, AllenAI in Seattle and Mistral in Paris, independently released open-source models (Tülu and Small 3) which are already challenging the new state of the art (with AllenAI indicating that its Tülu model surpasses the performance of DeepSeek-V3).
And the scope is even broader than this geographical aspect. Here is the thing we don’t talk nearly enough about: open-source will be more and more essential for our… safety!
As AI becomes central to our lives, resiliency will increasingly become a very important property of this technology. Today we’re dependent on internet access for almost everything. Without it, we lose all our social media/news feeds, can’t order a taxi, book a restaurant, or reach someone on WhatsApp. Now imagine an alternate world where all the data transiting through the internet had to go through a single company’s data centers. The day this company suffers a single outage, the whole world would basically stop spinning (picture the recent CrowdStrike outage magnified a millionfold).
Soon, as AI assistants and AI technology permeate our whole lives to simplify many of our online and offline tasks, we (and the companies using AI) will come to depend more and more on this technology for our daily activities, and we will similarly start to find any outage-driven downtime in these AI assistants annoying or even painful.
The best way to avoid such future downtime will be to build resilience deep into our technological chain.
Open-source has many advantages, like shared training costs, tunability, control, ownership, and privacy, but one of its most fundamental virtues in the long term, as AI becomes deeply embedded in our world, will likely be its strong resilience. It is one of the most straightforward and cost-effective ways to distribute compute across many independent providers, and even to run models locally and on-device with minimal complexity.
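The resilience point above can be sketched in a few lines of code: because an open-weights model can be served by many independent providers (or run locally), an application can fail over between them instead of dying with a single host. This is a toy sketch only; every provider name and behavior here is invented for illustration, not a real SDK.

```python
# Toy failover loop over interchangeable providers of the same open-weights
# model. All providers here are hypothetical stand-ins for real endpoints.
from typing import Callable, List


def resilient_complete(prompt: str, providers: List[Callable[[str], str]]) -> str:
    """Try each provider in order; return the first successful completion."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # a real client would catch narrower errors
            errors.append(exc)
    raise RuntimeError(f"all {len(providers)} providers failed: {errors}")


def hosted_api_a(prompt: str) -> str:
    raise ConnectionError("provider A is down")  # simulated outage


def local_model(prompt: str) -> str:
    return f"[local] answer to: {prompt}"  # simulated on-device fallback


result = resilient_complete("What is open-source?", [hosted_api_a, local_model])
# → "[local] answer to: What is open-source?"
```

With a closed model behind a single API, no such fallback chain is possible; with open weights, the local model at the end of the list is always available.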
More than national prides and competitions, I think it’s time to start thinking globally about the challenges and social changes that AI will bring everywhere in the world. And open-source technology is likely our most important asset for safely transitioning to a resilient digital future where AI is integrated into all aspects of society.
*Claude is my default LLM for complex coding. I also love its character, with its hesitations and pondering, like a prelude to the chain-of-thought of more recent reasoning models like the DeepSeek generation.
2025-01-31 05:27:22
RT Teknium (e/λ)
This is the entire code needed to reproduce R1 lol
Hundreds of Billions of Dollars Later
2025-01-30 22:41:26
Just now, Mistral released an open-source model, Mistral Small 3, in their favorite way (a magnet link).
The model is 24B parameters, with capabilities on par with Llama 3.3 70B and ChatGPT 4o-Mini.
Of course, Mistral didn't forget to praise DeepSeek R1 in the release announcement, hinting that the open-source community should pair Mistral Small with R1.
Open source really is the way forward!
2025-01-30 22:33:09
RT Tatiana Tsiguleva
GM!
Blended a few Midjourney sref codes for you 🫶
--sref 1063388544 423556855 3001268358 2543063138 3204801382 3194797783 2501172816 855992152 4023628201 636985248 1828875924 --profile tlsfsfo --sw 500 --stylize 500
2025-01-30 15:28:34
Now everyone understands why Claude is so fond of banning Chinese users' accounts 🐶
2025-01-30 10:19:55
RT To be, or not to be
“The only value system of our generation is making money by any means necessary, using wealth to prop up our inner insecurity. We have no values for our descendants to admire — neither the responsibility of the aristocrat nor the integrity of the scholar. Our only pleasure and dignity is: I have more money than you, and you are more wretched than I am!”