MoreRSS

HuoJu | 霍炬

Author of 《图解编程史》; relocated to Canada.

RSS preview of the HuoJu | 霍炬 Twitter feed

2025-03-09 00:05:58

Re 🤔

2025-03-09 00:03:03

come on... gemini-2.0

2025-03-07 01:51:33

Re @tualatrix @OwenYoungZh Wow, I had no idea Immersive Translate (沉浸式翻译) was built by @OwenYoungZh. I think it's the best and most focused commercial product of this whole AI wave, and I wouldn't even soften that to "one of the best". I repeated that exact line to several people just yesterday, yet I never imagined it was made by someone I've mutually followed for years!!

2025-03-06 03:00:44

RT Qwen
Today, we release QwQ-32B, our new reasoning model with only 32 billion parameters that rivals cutting-edge reasoning models such as DeepSeek-R1.

Blog: https://qwenlm.github.io/blog/qwq-32b
HF: https://huggingface.co/Qwen/QwQ-32B
ModelScope: https://modelscope.cn/models/Qwen/QwQ-32B
Demo: https://huggingface.co/spaces/Qwen/QwQ-32B-Demo
Qwen Chat: https://chat.qwen.ai

This time, we investigate recipes for scaling RL and have achieved some impressive results based on our Qwen2.5-32B. We find that RL training can continuously improve performance, especially in math and coding, and we observe that the continuous scaling of RL can help a medium-sized model achieve competitive performance against gigantic MoE models. Feel free to chat with our new models and give us feedback!

2025-03-06 01:07:46

Re @WildCat_zh I saw a colleague using this yesterday too, so I'll give it a try. But honestly, this thing is just... a wrapper around CDP; you could probably even have an AI write one from scratch.
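The "just a CDP wrapper" remark refers to the Chrome DevTools Protocol: driving the browser boils down to sending JSON command frames over a WebSocket to Chrome's debugging endpoint. A minimal sketch of what such a wrapper assembles, with the WebSocket transport itself omitted (a library such as `websockets` would carry the frames; the endpoint URL shown in the docstring is the conventional one when Chrome runs with `--remote-debugging-port=9222`):

```python
import json

def cdp_command(msg_id, method, params=None):
    """Build one Chrome DevTools Protocol (CDP) command frame.

    CDP is plain JSON exchanged over a WebSocket with the browser's
    debugging endpoint, e.g. ws://localhost:9222/devtools/page/<id>.
    Each frame carries an id (to match the reply), a method name,
    and a params object.
    """
    return json.dumps({"id": msg_id, "method": method, "params": params or {}})

# Two typical commands: navigate the tab, then read the page title.
navigate = cdp_command(1, "Page.navigate", {"url": "https://example.com"})
evaluate = cdp_command(2, "Runtime.evaluate", {"expression": "document.title"})
print(navigate)
```

`Page.navigate` and `Runtime.evaluate` are standard CDP methods; everything else a wrapper adds (session handling, reply matching, event callbacks) is bookkeeping around frames like these, which is why the tweet suggests one could be generated easily.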

2025-03-05 11:34:38

Re @LeaskH @frankyuyong https://www.instagram.com/katelab_brand/?hl=en It's from this brand, but this piece isn't listed yet.